Apple is a colossus. Some of us might remember back when it was doomed and nearly bankrupt, but these days, it generates hundreds of billions of dollars in revenue every year and has a market capitalization of more than three trillion dollars.
And yet even the most powerful companies are fallible. They often have a hubris that suggests their success in one area means they can easily extend it to others—frequently with disastrous results.
This week, Apple got another reminder that even its mighty power might not be able to make it succeed at all its ambitions. Don’t be embarrassed, Apple — even the world’s most beautiful models still get pimples from time to time.
This apparently happened just before WWDC, so I missed it, but it deserves everyone’s attention. Apple has rejected the UTM emulator app, not just from the global App Store, but from all third-party App Stores in the EU by refusing to notarize it. Michael Tsai’s site has the details:
This also seems inconsistent with the fact that the Delta emulator is allowed to be notarized outside the App Store. It doesn’t make much sense for the rules to be more lax within the App Store.
Apple needs to read the terms of the DMA again; Apple can’t reject UTM from distribution in third-party marketplaces, in just the same way it can’t prevent Epic from building an App Store. App Review is going to land them in yet another clash with the EU, and a potential fine-worthy rule violation.
In other words, parts of Apple apparently think that they can enforce inconsistent and arbitrary rules even outside the App Store, which is contrary to the entire regulatory process that led to the DMA and the concept of alternative App Stores in the first place. (This also happened to the iDOS emulator.)
On top of all that, it took Apple two months to come to the decision. (AltStore developer Riley Testut has said that Apple is taking ages to notarize any apps for his marketplace.)
The whole point of the DMA is that Apple does not get to act as an arbitrary approver or disapprover of apps. If Apple can still reject or approve apps as it sees fit, what’s the point of the DMA in the first place?
The Summer of Fun has arrived, but we’ve still got a lot of catch-up work to do after a huge WWDC. So this week we share more thoughts about Apple’s new AI strategy and discuss a lot of the feedback we got about what happened last week.
The change goes into effect starting today, Apple says. Existing users with open Apple Pay Later loans will still be able to manage them via the Wallet app.
In its place, Apple is focusing on new features coming globally to Apple Pay later this year, including the ability to access installment loan offerings from eligible credit or debit cards, as well as Affirm.
Miller has a full statement from Apple in his piece, but what I wondered was whether this was the fastest a feature had been shipped and then canceled. The feature launched in March 2023, meaning it was around for just fifteen months.
It seems likely this feature was not much used, especially since it competes with lots of other similar (and better-established) offerings. Also worth noting: the loans were backed by Apple itself, via a subsidiary called Apple Financing, LLC—which at the time seemed to only handle this “buy now, pay later” feature.1
Apple also recently announced during its WWDC keynote that it will incorporate loyalty/reward programs into Apple Pay as well as installment programs through your own bank. Perhaps the writing was on the wall at this point.
In any case, while Apple may have fancied itself a more reasonable purveyor of this particular service, I also can’t fault them for perhaps not wanting to be in a business that can feel a bit troublesome.
Fun fact: while digging around in corporate records, I discovered that Apple Financing LLC is listed as a branch of “Bespin Capital LLC.” I can only assume they specialize in financing tibanna gas mines. ↩
Vulture’s Josef Adalian joins Jason to discuss the fate of Paramount after the latest deal has fallen through, Warner Bros. Discovery moves to plan B on sports rights, and Joe walks us through his Vulture streamer rankings.
It’s official! Apple has AI! (Coming later.) Its other new operating system features also made an appearance at the WWDC keynote and if you see a Microsoft employee this week, give them a hug. They might need it.
Bandwagon = joined
OK, yes, we’re going to have to talk about AI again, but I’m pretty sure this is the last time we’ll have to.
Apple unveiled a range of AI offerings starting with Apple Intelligence—a collection of features handled both on-device and through secure cloud-based processing—and ending with the ability to pass queries off to ChatGPT when only a demonstrably wrong answer will do.
Reaction to Image Playground, a feature that provides AI-generated images in response to prompts, seems to be a unanimous blech, largely based on the generic DALL-E-looking output, but also on the input. Hope you don’t use “the open web” because, like so many other AI companies, Apple appears to have helped itself to whatever works you might have put out there in order to train its system. Don’t worry, though. You can opt out now, after all the five-legged AI-generated horses have bolted.
It’s currently not clear exactly how Apple is applying what its Applebot learned from reading the entire web. If it’s just teaching it how to talk, that’s less bad than teaching it what to say. But clearly something went into training Image Playground how to make those images no one seems to like very much.
A big question on many minds is, will Apple Intelligence hallucinate? Sure, it will. Don’t we all? I know I do. What? Who said that? But Apple says it did its best.
Cook: It’s not 100 percent. But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in.
You can’t make an AI without breaking a few eggs, many of which come in cartons of 13 and have an unexpected number of yolks.
The much-rumored and oft-sought-after AI-powered better Siri even made a brief appearance, if just a bit of a cameo.
Speaking of which, if you want to have the original Siri do a Cameo for you, you can.
Also present
Turns out Apple did announce things that were not related to AI, if you can believe it. Not sure why they bothered, but they did.
The new version of macOS will be Sequoia, and one of its big new features is, uh, your iPhone. A new Continuity feature actually lets you control your iPhone remotely right from your desktop. Apple has asked people not to then run Screens from their iPhones to control the Mac, as doing so could collapse the space/time continuum.
In terms of other platforms, visionOS also got some smaller updates and the iPad finally has a calculator, which has a number of cool new features, such as the ability to solve handwritten calculations. If you never thought you’d use algebra after graduating, you’ll at least use it to try this feature. And then probably never again.
“I’m pretty sure it’s one of these blue ones. Maybe this? Nope. This? Wrong again.”
Not great, Microsoft Bob!
But enough about Apple. How was Microsoft’s week?
Could have been better.
First the company had to walk back its recently announced Recall feature because it’s a security nightmare. Then ProPublica published a lengthy exposé on the company’s lack of reaction to a security bug.
How bad could it have been? Well, it allowed Russia to:
…vacuum up sensitive data from a number of federal agencies, including, ProPublica has learned, the National Nuclear Security Administration, which maintains the United States’ nuclear weapons stockpile…
OK, I’m not an expert on nuclear weapons (you’re thinking of my brother) but that seems bad.
To add insult to injury received by stepping on multiple rakes…
This change is probably less because of the Recall fiasco and the company dropping the nuclear football and more because Apple simply made AI announcements. Wall Street has signaled that Apple checked that box it wanted checked. Good job!
[John Moltz is a Six Colors contributor. You can find him on Mastodon at Mastodon.social/@moltz and he sells items with references you might get on Cotton Bureau.]
Neurodiverse users have also found value in the AVP. “I generally feel a lot better after having worn it for a while,” a user with autism and ADHD told me. “It’s like a reset for the brain.” When I chatted with them, they’d just drained their AVP’s battery by spacing out in the immersive lunar environment. “My brain just is hyperfocused on whatever stimulus comes in, so whatever I can do to manually cut those stimuli off helps me tremendously,” they said. “The Vision Pro is noise-canceling headphones for my eyes.”
Leland describes the joy a low-vision user finds with the headset, viewing windows the size of a garden shed, or not having to crane one’s neck to see a desk-mounted monitor. But he isn’t starry-eyed about Vision Pro, pointing out ways the people he profiled have struggled – sometimes a little, sometimes a lot – with the device. He was also treated to the Apple Park experience, and came away impressed by what he saw and heard from the accessibility team there.
Leland’s piece is most notable, though, for its thoughtful take on the nature of accessibility, and the way he contextualizes it for a wide audience without dumbing things down.
We recap WWDC until a truck harshly intervenes. Then John Moltz appears to save the day! [If you’re a More Colors or Backstage level member, this episode also contains our nearly hourlong monthly Q&A segment.]
As a native Safari extension, Magic Lasso blocks all intrusive ads, trackers and annoyances – delivering a faster, cleaner and more secure experience across all your devices. It’s easy to set up and easy to keep up to date. And Magic Lasso’s pro features enable you to block YouTube ads, craft your own custom rules and see ad blocking energy, carbon and data savings for any site.
It’s got over 5,000 five star reviews, which is pretty impressive. Download Magic Lasso Adblock today from the App Store, Mac App Store or via the Magic Lasso website.
It was clear that Apple didn’t want to be a part of this.
For years now, the company has resisted using the phrase Artificial Intelligence to describe what it does. It’s facile and misleading and not what Apple is really about at all. So it’s been working on advanced computing techniques and the hardware to drive them, describing it all with phrases like machine learning and Neural Engine.
Then, a couple of years ago, the AI buzzword exploded into the consciousness of… if not the general public, then the tech industry and press. The image creation and chatbot demonstrations were, yes, facile and misleading, but also quite a party trick. The narrative shifted: Was this the future of computing?
And there was Apple, churning away on its ML photo tagging in Photos and eye tracking in visionOS and image segmentation for silly video effects, suddenly caught flatfooted. The area it had looked on with some disdain—and not without reason—was suddenly the next big thing in tech.
And, of course, the scary part was: What if it was the next big thing? What if—after investing billions in visionOS and a car project that never even saw the light of day after a decade, all as a hedge against anything that might eclipse the iPhone—the technology that it had pooh-poohed was the real existential threat?
According to multiple reports, there was a moment in late 2022 or early 2023 when Apple realized that it needed to aggressively embrace AI or risk being seen as old and out of touch—not just by its industry but by the general public. The result was a prioritization of AI features all around the company, and the first fruits of that prioritization were on display at Monday’s WWDC keynote.
Three kinds of AI
It seems to me that on Monday, we saw three different kinds of AI features. Apple won’t ever say which was which, but the rest of us can guess at it. The three categories are:
Features Apple would’ve done anyway. The company has been rolling machine-learning stuff into its software for ages now. Would smarter photo tagging and better Inbox organization in Mail have been announced regardless of Apple’s new, big AI focus? Almost certainly. Apple has always been about leveraging advanced technology in the service of customer needs.
Features Apple would’ve done… eventually. It sure seems like Apple’s shift of priorities led to some features being announced that seem very much like the sort of thing Apple would’ve done, but at a slower pace and more conservatively. There seem to be more WWDC announcements than usual that ask for our patience because they won’t be ready until later this year or even sometime next year. But if it was an AI-related feature and was on the drawing board, it probably got pulled forward for introduction at WWDC 2024. Ready or not, here they come.
Features Apple would never have done. There are some areas where Apple shows the strain of fearing it’ll be seen as burying its head in the sand and resisting the trends. ChatGPT support and Image Playground both feel like areas where Apple is stepping far out of its comfort zone in order to seem more tuned in to the zeitgeist.
We’ll see how these different features end up faring over the course of time.
The difficult choice
Last week, I wrote that Apple should see itself as the grown-up in the tech industry’s rush toward AI at any cost. For all of AI’s potential, it can be unreliable, the training data sources can be questionable, and it all just feels overhyped. (To me, AI manages to be simultaneously overhyped and potentially world-changing. Great potential… lots of snake oil. All at once.)
Apple’s WWDC announcements show that the company wrestles with this idea too. In its most thoughtful moments, it has expressed the desire to approach AI features with clear, important guidelines. The tools are meant to be intuitive, personalized, deeply integrated into Apple’s product experiences—they’re features, not tech demos—and built with privacy in mind.
In most cases I think Apple’s announcements showed those principles in action. This is a company that realized it was not going to be able to perform all the AI actions it wants to from a user’s device, so it built its own server hardware and operating system and is hosting them in its own data centers, all so that it can ensure that private user data isn’t ever misused. Nobody else does that, but Apple knew it had to.
Most of what Apple announced is really about solving problems for users. Rather than creating a model that has eaten the entire Internet but has a troubling tendency to hallucinate, Apple has focused on serving a user’s personal context. It seems to have avoided buying into the broader tech industry fallacy that the future of computing is asking questions of AI chatbots that respond with clichés, dissembling, and hallucinatory answers.
Well. Avoided for the most part.
Apple has always been good at building features that play to its strengths, both technical and commercial. It owns your phone, it’s got your data in its ecosystem, it’s invested in building neural processing into its chips, and now it can put all that together to build a whole new Siri that knows your personal context and can put it into action.
But at the same time… all the cool kids are making weird AI art in Midjourney and Stable Diffusion, I guess? And even if Apple doesn’t have a chatbot that’s been trained on “world knowledge” (the official buzzphrase of WWDC 2024), I suppose you’re not in with the in crowd unless you’ve got a chatbot?
This is where Apple has compromised, a little, at a distance: Its Image Playground feature makes AI-generated art built around a nicer interface than any fill-in-the-text prompt, though the results are still kind of weird and off-putting. And Apple has wired in ChatGPT, because if Siri can’t help you, maybe a chatbot can.
Warning labels and defaults
Which brings me to Apple’s ChatGPT integration, which seems utterly necessary from the standpoint of making the iPhone not feel hopelessly behind Google, which is rolling its Gemini chatbot into Android. People who despise AI hype in general and chatbots in particular will probably be disgusted by Apple’s choice, but if you look closer, Apple’s clearly not comfortable with any aspect of the partnership.
Think about it. Apple made a much-ballyhooed introduction of its partnership with ChatGPT by showing off that every single time Siri offers to use it, it throws up a (scary?) alert dialog that asks if you’re sure you want to use the chatbot. When the ChatGPT results return, they’re appended with a warning label advising you to check all the information if it’s important!
That was just in the keynote. Before the day was done on Monday, Apple execs had also made it clear that the feature was turned off by default and that ChatGPT was just the first partnership of many. Google Gemini was mentioned by name as if it was only a matter of time before ChatGPT’s archrival would be integrated. They likened it to offering users different, interchangeable search engines in Safari. They also suggested repeatedly that the ultimate goal of the “external chatbot” feature wasn’t really general knowledge at all but chatbots trained on specific areas of knowledge, like medicine.
Now Bloomberg’s Mark Gurman reports that Apple’s not even paying OpenAI for ChatGPT. I get why OpenAI would do this deal—its biggest competitor has the home-field advantage on the world’s only other notable mobile operating system—but it really looks like OpenAI is the one over the barrel here, not Apple.
I think that might say it all. The conventional wisdom going into WWDC was that Apple was the company that was flailing and desperate, trying to catch up to the giants of artificial intelligence and retain some level of relevance. What we saw this week, however, was a company that seems surprisingly confident in its own AI prowess, and one that continues to largely follow its own playbook rather than compromising its ideals.
The hottest company in tech, OpenAI, just gave its crown jewels to Apple for free, and Apple responded by introducing its integration as optional and tagged with numerous warning labels. So, who’s behind and flailing, exactly?
The truth is that Apple right now is like a duck: serene on the surface but paddling furiously underneath. It was clearly complacent about the pace of AI innovation and allowed itself to get a bit too comfortable, and now it’s hurrying to keep up. It will undoubtedly make a few mistakes along the way. We also don’t know how good Apple’s stuff is—note that most of the Siri section in the WWDC keynote was aggressively in the future tense—and have to take the company’s word for how private its cloud services really are.
There’s a lot to interrogate here, and if Apple can’t fulfill its promises, it will be in real trouble. That’s a story that will be written over the next year, but right now, I wouldn’t bet against them.
We got together with Backstage pass members live on Zoom earlier today to discuss all sorts of stuff related to WWDC. We’ve embedded the video below, or you can watch it on YouTube.
One of the more contentious announcements from Apple this week is that it trained the foundation models underpinning its forthcoming Apple Intelligence features on, among other sources, content from the open web.
Obviously that raised a lot of eyebrows among those of us who publish content to the web. Whether using copyrighted material to train AI falls under fair use is a question still being hotly debated, and one that may ultimately be highly dependent on the exact circumstances. It’s also a behavior that people feel justifiably uncomfortable with.
Setting aside feelings on that issue for just a moment, it’s worth looking at the mechanics behind this. Apple also said during its announcement that it’s providing a way for publishers to exclude their sites from being used to train its AI models, via a long-established system built originally for search engines: robots.txt.
Robots.txt or not
If you’re not familiar with robots.txt, it’s a text file placed at the root of a web server that gives instructions about how automated web crawlers are allowed to interact with your site. This system enables publishers not only to block their sites from crawlers entirely, but also to specify which parts of a site to allow or disallow.1
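For example, a site that wants to admit all crawlers everywhere except a couple of directories could use something like this (the paths here are hypothetical):

User-agent: *
Disallow: /drafts/
Disallow: /private/

Any path not matched by a Disallow line remains open to every crawler that honors the file.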
Apple’s own Applebot web crawler has existed for some time; it’s used to power search features in Spotlight, Siri, and Safari. For example, whenever you see a Top Hit in Safari or Spotlight, that information is coming from a search index created using Applebot. Podcasts are also fed by Applebot, though in those cases only via specific URLs registered with Apple Podcasts.
If you’re worried that blocking Applebot from crawling your site might impact your site showing up in traditional search results, good news: the new AI training element of Applebot uses a separate identifier, allowing you to block only that functionality without affecting your site’s appearance in Apple’s search features.
How to stop Apple from using your content in AI training
Apple provides a detailed support document about Applebot and the various directives to control how it interacts with your site.
To specifically exclude your whole site from being used for Apple’s AI training features, you can add the following to your robots.txt file:
User-agent: Applebot-Extended
Disallow: /
To test this out, I’ve added those directives to my personal site. This turned out to be slightly more complicated, given that my site runs on WordPress2, which automatically generates a robots.txt file that you can’t simply edit directly. Instead, you have to add a snippet of code to your theme’s functions.php file by going to the administration interface, choosing Appearance > Theme File Editor, and selecting functions.php from the sidebar. (You can also do this via a plugin like Code Snippets, which I use.)
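A minimal sketch of such a snippet, using WordPress’s robots_txt filter hook to append the new directives to the automatically generated file, might look like this:

// Sketch: one way to append the Applebot-Extended directives to
// WordPress's auto-generated robots.txt via the robots_txt filter.
add_filter( 'robots_txt', function ( $output, $public ) {
    $output .= "\nUser-agent: Applebot-Extended\nDisallow: /\n";
    return $output;
}, 10, 2 );

Because WordPress serves its robots.txt virtually, this approach adds the directives without touching any file on disk.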
If you want to go beyond Apple, this same general idea works for other AI crawling tools as well. For example, to block ChatGPT from crawling your site you would add a similarly formatted addition to the robots.txt file, but swapping in “GPTBot” instead of “Applebot-Extended.”3
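So, for example, a robots.txt that opts out of both Apple’s and OpenAI’s training crawlers would simply include a block for each:

User-agent: Applebot-Extended
Disallow: /

User-agent: GPTBot
Disallow: /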
One challenge with these AI tools, however, is whether or not the damage has already been done. Many of these AI models have, of course, already been trained, and it’s not as though you can remove training data from them by blocking these crawlers now—it’s very much closing the barn doors after the horses have gotten out.
However, it does mean that you can prevent your site and content from being used for training going forward, so any material you publish from that point on will be excluded. If you feel like that’s cold comfort, you’re not alone.
Update: An earlier version of this article misstated the user agent for ChatGPT’s crawler: it’s GPTBot, not ChatGPT-User.
It’s worth noting that robots.txt has no legal or technological basis for enforcement: it’s essentially a convention that companies have agreed to abide by. ↩
I used this method to block crawlers on the Six Colors WordPress instance.—Jason↩
To block both crawlers, you’d just add separate directives in the robots.txt for each of them. ↩
[Dan Moren is the East Coast Bureau Chief of Six Colors, as well as an author, podcaster, and two-time Jeopardy! champion. You can find him on Mastodon at @dmoren@zeppelin.flights or reach him by email at dan@sixcolors.com. His next novel, the sci-fi adventure Eternity's Tomb, will be released in November 2026.]
This week I got a chance to spend a little time with my pal Casey Liss, and he regaled me (and Stephen Hackett, since we are the three big college football fans in our larger tech nerd friend group) with the story of what he’s been up to this summer. It’s now a blog post:
During this off-season, I self-assigned a new project: The Tailgate Tub™.
If you are reading this site, you’ll probably enjoy this post—it’s all about a tech nerd going above and beyond to solve a problem. I’m happy that the power outage that led me to purchase an enormous battery inspired a small portion of Casey’s major tailgate upgrade.
I’m just back from Cupertino, and there’s an awful lot to think about. But before all that, I thought I’d cover what absolutely everyone is talking about: visionOS 2.
More seriously, I installed the visionOS 2 developer beta this morning—this is the entire reason I have a Vision Pro!—and I’ve got a few quick thoughts before plowing on to developer sessions, thoughts about Apple Intelligence, and the rest.
Spatial photos. The most impressive single feature I’ve seen in visionOS 2 so far is the ability to create spatial photos out of your old, mundane 2D photos. Load pretty much any photo in the Photos app and tap the Spatial icon in the top left corner, and a fun sound effect plays as an animation sweeps across the image, representing the system’s machine-learning software scanning the photo and building an artificial depth map to provide the illusion of depth.
You know, very few movies are shot in stereo anymore. It’s more expensive and cumbersome to shoot, and conversion to 3-D after the fact is good enough. Upon viewing Apple’s spatial-converted photos in visionOS 2, I had similar thoughts: I don’t see why we would ever need to shoot stereo images if machine learning is this good at faking it.
Seriously, whether it was a photo taken last week or 50 years ago, Apple’s algorithm does a staggeringly good job at building a depth map. You don’t need embedded LiDAR or other depth information—the algorithm does it, and it does it incredibly well. Pictures of my kids taken when they were little are suddenly given more depth (literally and figuratively). Pictures of me as a kid, even. A complex shot with a tower made of LEGO bricks in the foreground was scanned and mapped perfectly.
It takes about 15 seconds to spatialize a photo, so it’s unlikely that Apple will ever ask the Vision Pro to churn in the background through every single photo in your library, but I’ve yet to see a photo that failed to become more interesting after being converted.
Quick access to status!
New gestures. Apple has added some new gestures to visionOS, which key off of you holding out your palm and looking at it. When you do that, a floating icon appears next to your hand indicating that you can tap your finger and thumb together to open the Home View. (There’s also a new Close button at the top of the Home View so you can close it again.) It’s a nice idea because as much as I’ve internalized reaching up and tapping the Digital Crown to bring up the Home View, the act of doing so is also jarring given that all other Vision Pro interactions are gestures happening in the air in front of me.
I’m actually more excited about the second gesture that keys off the first. After you look at your stretched palm (have you ever really looked at your hand?) you can flip your hand over to reveal a floating bubble that displays the time, battery percentage, and volume. Given how frustrating it was to quickly check the time in visionOS, this is a great new feature that I anticipate using a lot.
If you bring your fingers together while looking at the bubble, you can slide right or left to adjust the device volume quickly. And if you tap while looking at the bubble, Control Center opens. This, to me, is a much better way to access Control Center—though it’s certainly less discoverable than the little firefly that frequently hovers at the top of your vision.
Customize Home View. You can now move apps around in the Home View, and it works pretty much how you’d think: while looking at an app, bring your finger and thumb together to enter Jiggle Mode and then pinch the app to drag it around, even across pages. It worked exactly as I expected it to, and I’m happy to finally be able to put some of my favorite apps on page one.
If you’re a Magic Keyboard user, you’ll be able to see it now.
Breakthrough keyboards. Typing on a keyboard in an Environment was previously very weird, because while your hands were visible, your keyboard itself was not. Apple has upgraded this in visionOS 2 so that it recognizes either the Apple Magic Keyboard or the keyboard on a MacBook, and that’s great.
But at least in the first developer beta, I found that it was a bit finicky—it needed the keyboard to be positioned just-so for it to appear, and even then it sometimes felt like the environment was eating away at the edges of the keyboard. I’m not even a hunt-and-peck typist, but being able to actually orient on the keyboard by seeing it is still valuable.
Also, I’m disappointed that Apple has limited this feature to its own keyboards. I realize it would be harder to generate a model that recognizes more generic keyboards, but most keyboards really do have some pretty obvious characteristics in common, don’t they? Alternatively, maybe Apple should consider a feature like the one Meta offers, which lets you work in an environment but with a specific cut-out—like a tabletop—set to pass through.
Life’s a beach. Famously, visionOS shipped with two Environments marked as “Coming Soon”—one featuring a blurry beach image and another with fog-shrouded trees. The trees remain a mystery—Twin Peaks environment?—but the beach has arrived in the form of Bora Bora. As someone who loves beaches, it is spectacular. You can hear the sound of the waves, the palm trees gently blow in the breeze, and you can even see the color of the beach change as a thin cloud passes over the sun. I think I am going to be spending a lot of time in Bora Bora (the visionOS 2 environment, alas).
I didn’t get a chance to test out some other features due later this year, including support for a much larger display in Mac Virtual Display mode and multi-view support in the Apple TV app. In a Spatial Persona call with my podcast co-host Myke Hurley, the updated personas looked good (most notably hand gestures) but Myke’s mustache still prevented his persona’s mouth from moving. There’s more work to be done there.
Still, it’s exciting to get some major new features out of visionOS 2, despite the fact that visionOS 1.0 only shipped a few months ago.
Our favorite features of Apple Intelligence, the biggest missed opportunity at WWDC 2024, our most anticipated iOS 18 feature announced, and our reactions to the lackluster iPadOS announcements.
As long as your host operating system is macOS 15 or newer and your guest operating system is macOS 15 or newer, VMs will now be able to sign into and use iCloud and other Apple ID-related services just as they would when running directly on the hardware.
This is a complaint I’ve heard from more than a few developers of my acquaintance, and the change should help improve testing and other processes. As someone who usually installs a beta release of macOS at some point during the cycle (albeit not in a VM), it’s clear to me how much iCloud integration is part of the whole OS experience, and the previous prohibition was a big limitation in terms of testing real-world usage.
However, there are still some restrictions in place: the feature is only supported if both the host and virtualized OS are macOS 15 Sequoia or later.
Apple, as a company, has always extolled the value of putting the “personal” in “personal computer,” from its earliest days pushing back at the monolith of IBM and beige boxes that all looked like one another to its more recent extremely personal devices like the iPhone, Apple Watch, and AirPods.
But that ethos of “personal” technology has always been in fundamental tension with the company’s other overriding principle: Apple knows best. Whether it’s the design of its apps or how to use its features, the company has a strong streak of imposing what it believes is the best approach on its users.
In the company’s latest platform updates, this tension is more apparent than ever. Apple announced several new features that allow users to bring their own touches to their devices—but it did so in a typically Apple fashion that still kept everything within bounds.
Live from Cupertino, Jason has his in-person reactions to Apple’s big WWDC announcements. And in London, Myke processes his feelings about some controversial Apple choices.