My thanks to Unite Pro for sponsoring Six Colors this week.
Safari web apps and PWAs are a nice start, but they’re limited. Browser tabs are messy. And most tools for turning websites into apps still feel more like wrappers than real Mac software.
Unite Pro takes a different approach. It turns any website into a fast, isolated Mac app built specifically for macOS — with support for Window, Sidebar, and Menu Bar modes, deep visual customization, smart link forwarding, and native enhancements like dock badges, meeting alerts for Google Calendar and Outlook, AI overlays for ChatGPT, Gemini, Grok, and Claude, and more.
What makes Unite Pro special is how much control it gives you. You can remove distractions, force dark mode on sites that don’t natively support it, apply custom scripts and styles, and shape each app around the way you actually work — while keeping sessions, cookies, and permissions separate from your browser.
Six Colors readers can get 20% off Unite Pro this week with the code SIXCOLORS. Learn more and download at bzgapps.com/unite.
It’s the end of an era: Apple has confirmed to 9to5Mac that the Mac Pro is being discontinued. As of Thursday afternoon, the machine has been removed from Apple’s website: its “buy” page now redirects to the Mac homepage, and all other references are gone.
Apple has also confirmed to 9to5Mac that it has no plans to offer future Mac Pro hardware.
A quiet end to what was once the flagship of the Mac product line. But time comes for us all.
Over the years, as laptops rose in prominence and other Mac desktops added power, the Mac Pro increasingly became a niche, high-end device. After the disastrous trash-can Mac Pro design, Apple made good on a promise to revive the Mac Pro, and shipped a new take on the “cheese grater” enclosure. But the move to Apple silicon really killed the product, since Apple’s modern chip architecture doesn’t support external GPUs, which was one of the last remaining reasons to buy a Mac Pro.
In the interim, the Mac Studio has become the top-of-the-line desktop. It’s great. RIP to a real one, but it’s time for us all to move on.
Public spaces like Cosm might be a good content fit with Vision Pro.
The Apple Vision Pro feels like a product that’s waiting for the world to catch up, but the reality is closer to the opposite. The world is waiting for a reason to use it and that reason hasn’t quite shown up yet.
There’s very little wrong with the hardware. Apple built something that works in a way first-generation devices rarely do (says the guy old enough to have bought a Newton at launch) with displays that feel natural rather than novel and an interface that disappears quickly enough to let you focus on what you’re seeing.
The problem comes the moment you take it off. There isn’t a strong pull to put it back on. It’s impressive, even remarkable in bursts, but it doesn’t yet fit into a daily rhythm. That’s not a hardware problem. It’s a content problem, and more specifically, a cadence problem. Apple has treated immersive content like a prestige release schedule, carefully curated and spaced out, which works for television but not for behavior. If you want people to build a habit around something, you need volume and consistency, not occasional brilliance. Right now, Vision Pro feels like something you check in on rather than something you live inside, and that distinction matters more than anything on the spec sheet.
Neal Stephenson’s skepticism lands because it recognizes that gap. If the content never reaches a point where it becomes necessary, the headset remains optional, and optional devices rarely scale. What’s interesting is that the missing piece isn’t hypothetical. It already exists in a different form, outside of Apple’s ecosystem, and it’s showing up in a place that Apple understands better than most companies: people paying for experience.
Cosm is the cleanest example of that. It’s easy to dismiss it as a high-end gimmick, a giant dome with a better screen, but that misses what’s actually happening inside those venues. People are buying tickets, planning nights around it, treating it as something closer to attending a game than watching one. The technology matters, but the behavior matters more.
Cosm is already generating meaningful revenue and drawing repeat customers, which tells you this isn’t just novelty value. It’s tapping into something real, the idea that proximity, or at least the feeling of it, has value even when the event is happening somewhere else.
The challenge for Cosm is that scaling that experience is difficult. These are expensive builds that require the right locations, the right partnerships, and enough capital to expand without diluting the quality that makes them work in the first place.
That is exactly the kind of problem Apple has solved before. It’s not just about having the cash, though Apple certainly has that. It’s about having the discipline to build a system that can expand without losing its identity and the distribution to make it visible at scale. If Apple owned something like Cosm, it wouldn’t just be a set of venues. It would be a front door. You could put an Apple Store in the lobby and it wouldn’t feel forced. It would feel like a natural extension of the experience, a place where people encounter the hardware in the context of something they already understand.
From there, the path to the home becomes clearer. Vision Pro, or whatever lower-cost version follows, doesn’t need to stand on its own as a category. It becomes an extension of something people have already bought into. The idea of watching a game “from somewhere else” is no longer abstract because they’ve already felt it in a room with other people. At home, it becomes a different version of the same experience, missing the crowd and the waiter, but gaining convenience and access.
The critical shift is in how Apple approaches rights. Trying to own sports outright is a losing strategy. The costs are too high, the competition too entrenched, and the fragmentation too deep. Apple has made smart moves with MLS, F1, and selective partnerships, but doubling down on exclusivity won’t unlock this. The better path is to work alongside the existing ecosystem. Install Cosm camera systems at major events, not as replacements for the broadcast but as an additional layer. Let networks and leagues sell that immersive feed as a premium product, with Apple taking a share for the technology and distribution. It’s additive rather than competitive, which makes it easier to scale and harder for partners to resist.
Apple has always been at its best when it connects behavior to technology in a way that feels inevitable in hindsight. Right now, Vision Pro still feels like a solution looking for a problem. The problem, or more accurately the opportunity, is already there in how people respond to immersive sports experiences. Cosm has shown that people will pay for that feeling. The hardware is close enough to deliver it at home. The gap is building the bridge between those two things in a way that feels continuous rather than experimental.
If Apple gets that right, the conversation around Vision Pro changes quickly. It stops being about whether people want to wear a headset and starts being about what they’re missing when they don’t. That’s the point where adoption tends to take care of itself.
[Will Carroll was an early writer at Baseball Prospectus who covers injuries at Under the Knife and talks about them on Injury Territory. He frequently co-hosts Downstream with Jason Snell on Relay.]
Harry McCracken has put together an amazing oral history of Apple’s earliest days. You should read the whole thing, but this anecdote from Chris Espinosa, who still works at Apple after all these years, is the part that made me laugh the most:
I was sitting there in the Byte Shop in Palo Alto on an Apple-1 writing BASIC programs, and this guy with a scraggly beard and no shoes came in and looked at me and conducted what I later understood to be the standard interview, which was “Who are you?” I said, “I’m Chris.” … Steve Jobs’s idea back then of recruiting was to grab a random-ass 14-year-old off the streets.
The fifth season of Apple TV’s “For All Mankind” premieres March 27—really, the evening of March 26 for those of us on the West Coast. For the last few years, Dan and I have been reviewing episodes on our “NASA Vending Machine” podcast and I’m excited to have the show back.
As always, “For All Mankind” is about taking big swings. There’s always a dramatic, history-changing moment or shocking twist that’s not too far away. Set in an alternative past where the Space Race kept going after the Soviets landed on the moon (yep!), season four took us to a 2003 where Mars colonists sought more autonomy by hijacking an asteroid.
This season, which takes place in 2012, is still primarily set on Mars, though there’s also some space adventure in the offing. Apple tech fans will enjoy that we’ve finally reached the iPhone era, though the iPhones on “For All Mankind” are a little thicker than the ones we remember, and they might actually be Newtons. There are also a lot of early-2010s iMacs on display.
While the first episode has to do a lot of work reminding you of what’s happened recently and setting up the new power dynamics at play this season, subsequent episodes get pretty intense, pretty fast. At times the show plays with police procedural, mystery story, even car-chase adventure… familiar TV genre stuff, except it’s all on Mars! Mireille Enos of “The Killing” plays an important new role as an investigator for the Mars Peacekeeping force who is suspicious that several different crimes might have been committed out on the surface. There are also a bunch of returning faces, some expected and some quite surprising. (And also, yes, Joel Kinnaman is still in the show even though Ed is now basically in his eighties.)
I’ve seen the first six episodes thus far, so I don’t know where it’s all going, but I’ve sure enjoyed the ride. “For All Mankind” continues to use its alt-history setting to tell dramatic, almost operatic stories that can also disturbingly have relevance to current events in our own world.
Claude Code takes advantage of a real development environment.
I’m pretty skeptical that Apple’s new Siri-wrapped Gemini will be able to accurately and reliably assist with automation. Gemini will be the foundation of Apple’s foundation models, but there’s no there there. Apple has no well-documented, debuggable, inspectable system to execute automation with, unless you count ancient and inscrutable AppleScript, and you shouldn’t.
Sure, LLM chatbots will spit out code (even AppleScript!) if you ask them to, but it might not work. It gets substantially worse when you’re asking LLMs questions about Shortcuts.
Go ahead and ask any chatbot to describe how to make a Shortcut to perform some automation that you’ve been wanting to do and then try to assemble what it suggests. It’s extremely tedious, prone to user error, and isn’t in any way guaranteed to work even when it’s all put together.
Agents that hook into development environments are much better than a bare chatbot because they can inspect, run, and debug the code they are generating. They aren’t perfect, but if you have an agent like Claude Code hooked up to a development tool like VS Code and start describing some Python script you want, it’ll execute and iterate until the output is what you asked for.
If humans don’t have access to documentation, actionable debug output, logging, the ability to bypass/ignore actions as part of testing, and the ability to copy and paste snippets of code, then how can the new Siri do it?
Right now, Shortcuts works with AI models by passing some input and then receiving the output. When something goes to the model, the model transforms the data, and delivers a result back to Shortcuts. That’s a non-deterministic workflow, so any change to the model, or even just randomness in general, can produce different output. This means you can’t reliably troubleshoot or adjust it without introducing uncertainty in what new outputs you’ll get.
When working with an agent to assemble automation in an IDE, the code it builds is deterministic, so it will keep working even if the model changes. Not everything you want to automate requires LLM functionality when it runs, but not everything you automate should require hours of labor to fabricate the deterministic workflow version of it.
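To make that distinction concrete, here’s a minimal, hypothetical sketch of the kind of deterministic script an agent might generate and verify (the function name and data are invented for illustration; this isn’t anything Shortcuts or Siri actually produces). Once it passes its checks, it returns the same output for the same input forever, no matter how the model that wrote it later changes:

```python
# Hypothetical example: a deterministic cleanup step an agent could
# generate, run, and check. No model is involved at runtime, so the
# behavior can't drift when the model is updated.
def tidy_names(names: list[str]) -> list[str]:
    """Strip whitespace, normalize capitalization, and sort."""
    return sorted(n.strip().title() for n in names)

print(tidy_names(["  snell, jason ", "moren, dan"]))
# → ['Moren, Dan', 'Snell, Jason']
```

An agent in an IDE can run this, look at the output, and iterate; a Shortcuts action that asks a model to transform the data at runtime can never be pinned down that way.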
I really hope that the magic of new Siri isn’t going to be that it will just do things with bare actions and App Intents, magically, without any user-accessible process, or as a blob inside of a Shortcut you need to make. If I ask Siri to reorder a list, and it doesn’t do it correctly, I want to be able to access the scaffolding it created to see what went wrong, not just keep asking Siri to do it again in slightly different ways until I get output I like.
If Siri doesn’t produce anything inspectable, or it produces a Shortcut, then there’s not much work humans or AI can do to fix things.
AI cut below the rest
The problem the Shortcuts app is supposed to solve has never been solved, because no one really knows how to use Shortcuts unless they become a Shortcuts expert. Shortcuts is user-friendly in appearance, but not in practice. It’s meant to welcome people who don’t know anything about programming with its friendly drag-and-drop interface and searchable actions panel.
Unfortunately, the names for actions don’t always say what they do, and the documentation is often a vague piece of filler that’s frequently reused for more than one Shortcut action. Even experienced programmers can get flummoxed when they try to search the available actions for seemingly standard functions, like reversing a list.
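For contrast, in an ordinary scripting language that same “standard function” is a single, documented, searchable built-in (Python here purely as an illustration):

```python
# Reversing a list in Python: one built-in, easy to find in the docs.
guests = ["Ed", "Dani", "Kelly"]
reversed_copy = list(reversed(guests))  # non-destructive copy
guests.reverse()                        # in-place reversal
print(guests)
# → ['Kelly', 'Dani', 'Ed']
```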
Magic connections are magic, until your script gets longer than your screen and you need to start dragging actions around, inevitably breaking connections and making unintended ones. With a text-based script you’d have to keep track of the names and spelling of your variables, but they don’t change out from under you if you add more lines of code above or below them.
You can’t do one of the most simple, and useful things in scripting, which is commenting out (ignoring/bypassing) something to test or evaluate alternatives.
A lot of the time, when people are using Shortcuts, they’re relying heavily on the Run Shell Script action to do actual programming that lets them write normal, vanilla code, or ssh’ing into a server from iOS to do the same thing. It’s nice that Shortcuts can do that, but shell scripts aren’t cross-platform, and ssh’ing into a server is in no way accomplishing Shortcuts’ mission.
Without logging, you can’t ask Siri why your automation that was supposed to run in the middle of the night didn’t run. Maybe it was a permissions issue that was never raised when the shortcut was created. You, and Siri, just don’t know.
AI rising tide lifts all boats
Again, Apple doesn’t have to do these things just for humans, or just for Siri. They are in no way mutually exclusive.
If the concern is that Shortcuts shouldn’t be like a programming language, with tracebacks and logs that would put off “normal people,” then just remember that “normal people” don’t really use Shortcuts. They ask a chatbot to just do it, and Siri, as Apple’s chatbot, could take advantage of those fiddly programming bits and perform its role better, in a way that was auditable.
I have seen people make frantic posts on Mastodon about how AI is deskilling programmers, but the beauty of Shortcuts is that Apple already applies the deskilling at the factory.
[Joe Rosensteel is a VFX artist and writer based in Los Angeles.]
Sounds bad. But if they’re not recalling the routers, and they’re not fixing them… what the heck is the government actually doing?
It’s banning future routers that haven’t been made yet.
You’re not making a lot of sense.
I warned you this was a story about Brendan Carr, known dummy and anti-consumer FCC chairperson! Specifically, the FCC is keeping new, previously unannounced, foreign-made consumer routers out of the US… unless it decides to exempt them. For reasons. We’ll get to those.
Hollister classifies this as a shakedown to somehow force more manufacturing jobs back to the U.S. Gee, I wonder if there could possibly be any…let’s say exploitable…loopholes to this brilliantly concocted plan:
What if I buy one of those newer routers in Canada and bring it back home?
The FCC’s magic 8 ball says, “no,” but good luck enforcing that, Brendan.
Apple today announced Apple Business, a new all-in-one platform that includes key services companies need to effortlessly manage devices, reach more customers, equip team members with essential apps and tools, and get support from experts to run and grow efficiently and securely. Apple Business features built-in mobile device management, helping businesses easily configure employee groups, device settings, security, and apps with Blueprints to quickly get started. In addition, customers can now set up business email, calendar, and directory services with their own domain name for seamless and elevated communication and collaboration.
This new offering actually consolidates three existing Apple products (Apple Business Manager, Apple Business Essentials, and Apple Business Connect) and offers mobile device management for free, which will save some existing customers money. There are also some new API functions for larger organizations, and Apple is offering businesses access to Apple-hosted email and calendaring for the first time. The new Blueprints feature will make it easier for administrators to assign configurations and apps.
Also announced today is something that has been widely expected: ads in Maps in the U.S. and Canada. We now know those will arrive this summer. Apple provides additional details further on:
Ads on Maps will appear when users search in Maps, and can appear at the top of a user’s search results based on relevance, as well as at the top of a new Suggested Places experience in Maps, which will display recommendations based on what’s trending nearby, the user’s recent searches, and more. Ads will be clearly marked to ensure transparency for Maps users.
Again, it’s not a huge surprise to see this—Apple has been working on bolstering its ad business in the past few months. But it does mean that once this feature is enabled, you’ll have to scroll past an ad to see results when you search for stuff in Apple Maps.
Ads or no, companies that use Apple Business will also be able to edit their metadata and upload pictures directly into Apple Maps.
It’ll take some time to digest these changes, but it seems like this is a simplification of Apple’s business offering, and making MDM free will be a win for smaller organizations. Unfortunately, Apple’s still only offering 5GB of free iCloud data on managed accounts, and it’s hard to think that any business should rely on Apple’s notoriously unreliable email platform.
From Eric Berger at Ars Technica, the first of a three-part series about orbital data centers. This first part focuses largely on economics, but also touches upon issues of the environment, the obliteration of the night sky, and more. It’s a really fascinating read.
“This is not physically impossible; it’s only a question of whether this is a rational thing to scale up economically,” [engineer Andrew] McCalip said. “The answer is it’s really close. And if you own both sides of the equation, SpaceX and xAI, it’s not a terrible place to be. I wouldn’t bet against Elon.”
Yet betting on Elon also requires a giant leap of faith.
The third part of this series will dive deeper into detailed cost estimates, but in terms of round numbers, the bare-bones cost of deploying 1 million satellites is more than a trillion dollars. SpaceX’s two biggest projects to date, the hyper-ambitious Starlink and Starship programs, each required on the order of $10 billion up front. So in terms of scope and cost, orbital data centers are two orders of magnitude larger.
The part that has me curious, but isn’t really addressed in the story, is future-proofing. Companies like Nvidia are kicking out new chips at such a rate that the processors you send into orbit will almost certainly be outdated by the time they’re operational. Will that be enough to offset the perceived gains? Are we constantly going to be launching new satellites? What happens to the old ones? What if the AI bubble bursts? All fertile ground for a near-future sci-fi story, methinks, if not near-future non-fiction.
Collect your meager Kalshi and Polymarket winnings, I guess: Apple has officially announced that its 2026 Worldwide Developers Conference will kick off with an in-person event at Apple Park on Monday, June 8.
As in recent years, the event will run for the week, starting with the Keynote and Platforms State of the Union on Monday, followed by video sessions and labs. Videos will be accessible on the Apple Developer app and website, as well as YouTube.
Those who attend the in-person event will be able to watch the Keynote and State of the Union, as well as participate in other activities throughout the day. Attendees will be determined by random selection, with invitations sent out by the end of the day on April 2. You must be a member of the Apple Developer Program or Apple Developer Enterprise Program, or a winner of the 2026 Swift Student Challenge to apply. Additionally, 50 “Distinguished Winners” of the challenge will be invited to a three-day experience that includes Monday’s special event.
There are, as usual, lots of eyes on WWDC, where Apple traditionally announces all its big platform updates for the year to come. This year we’re expecting the “27” year updates, but there are big question marks hovering around some features, like the much-delayed Apple Intelligence offerings. We’ll find out that and more on June 8.
I recently started seeing pictures in Photos that I was sure I didn’t take, often of locations I vaguely recognized. After a few days, I realized the issue: my older child, in college on the East Coast and currently traveling during spring break, had enabled a Camera setting that caused all their images to blend into our shared family library.
Fortunately, our kid is old enough, wise enough—and communicative enough with us parents—that I didn’t see anything they didn’t want me to see, but it is the kind of thing that could be awkward in some shared groups.
The issue is that the Camera app has a tiny icon that’s easy to tap and activate without realizing it. That button’s activation is then persistent for all subsequent photos you take!
Here’s how to work with this feature intentionally, and deactivate it if you never want to use it—with purpose or not!
Share with those who care
The iCloud Shared Photo Library is an oddball: you can create or belong to one, and one only, and it can be shared with the creator plus five others, who don’t need to be in your Family Sharing group, if you have one.1 The thing that you create is called Shared Library throughout the interface.
Once you’ve created or joined a Shared Library, its availability appears in Photos on your devices with iCloud Photos enabled and, more subtly, in Camera. If you’re viewing both libraries in Photos, images from the Shared Library have the Shared Library two-person icon overlaid in the upper-right corner; videos, for some reason, do not.
A pop-up menu appears in Photos for Mac that lets you choose whether to see your Personal Library, Shared Library, or both.
On a Mac, Photos: Settings: Shared Library reveals participants and offers a Shared Library Suggestions checkbox. Enable this if you’d like your Mac to say, “Hey, maybe you should add this image to the Shared Library!” (I didn’t find this particularly useful, and disabled it.)
Go to Settings: Apps: Photos, and you’ll note an extra option on iPhones and iPads: Sharing from Camera. That’s the culprit in my offspring’s openness.
Keep Camera shots your own (or not)
You can tap Sharing from Camera there, or go directly to Settings: Camera: Shared Library: Sharing from Camera. With Sharing from Camera enabled, you see a yellow icon of two people side-by-side in the upper-left corner of the screen in portrait mode or the lower-left corner in landscape. Tap it to toggle sharing directly from Camera. A yellow label appears at the top of the Camera interface to indicate which library is in effect after you tap the button. When Sharing from Camera isn’t turned on, the icon appears with a line through it.
When you activate the two-person icon, the Shared Library label appears (highlighted here with an added red box).
The setting is persistent within Camera, so each time you open Camera, your previous Shared Library choice remains. You can override this via Settings or by tapping the icon again.
If you never want to enable Shared Library by accident or intentionally, disable Sharing from Camera in the Photos: Sharing from Camera settings.2
If you found you put media in the Shared Library and want to return them to your own, you can fix this quite easily:
On a Mac, select the items in Photos, and choose Image: Move X Photo(s)/Video(s) to Personal Library. You can also Control/right-click on any item in the selection.
On an iPhone or iPad, go to the Library view in Photos, tap Select, and choose one or more items. Tap the More … icon at upper-right, and choose Move to Personal Library.
In a bit of turnabout, a few days after writing this column, I got a text from my youth: “Your photos seem to be going into my account. I think you pressed the share button in the app by accident.” D’oh!
For further reading
Our very own leader, Jason Snell, has a book that covers Shared Library and much more. Pick up a copy of Take Control of Photos to get up to speed on this and other quirky Photos features.
[Got a question for the column? You can email glenn@sixcolors.com or use /glenn in our subscriber-only Discord community.]
Shared Library requires iOS 16.1, iPadOS 16.1, or macOS 13 Ventura or later. iCloud Photos must be enabled on each participating device. People under 13 can only join (or create) a Shared Library with Family Group members. ↩
There’s a Share Automatically option that puts photos you take with Camera in the Shared Library whenever participants in the Shared Library are recognized while taking the picture. ↩
Because this is a Mac, it can do effectively anything you want to throw at it. I have been editing video on it and doing web and Xcode development with very little issue. Yes, my Xcode builds take longer than they do on my M4 Pro Mac, and rendering video runs slower too, but it can do it. If this is the computer your budget allows, don’t assume it can’t do what you need.
However, you’ll notice I’m not including any benchmarks in this review. I don’t think benchmarks properly express the experience of using a computer, particularly on the lower end of products. On the high end, when everything basic is child’s play, benchmarks can be helpful to understand differences at the margins. But one of the charts I’ve been seeing go around since this was announced was Geekbench single-core performance, which showed the Neo as effectively as fast as an M4. Let me tell you, if you’re using that chart to understand the performance of this computer, you are being misled.
You certainly are, if your definition of ‘use’ is Xcode builds and video editing. Matt is not technically wrong when he suggests later in his post that, in workflows requiring lots of RAM and speedy disk access, even the oldest Apple silicon MacBook Air will perform a bit better than the MacBook Neo.
When performance is in the ballpark of a five-year-old computer, you have to consider what this machine costs compared to refurbished models from that era…. I can get a refurbished M2 MacBook Air with 512GB storage and 16GB RAM for $50 less than the maxed out MacBook Neo. That M2 Air gets you significantly better performance, a better screen, a better trackpad, better USB-C ports, MagSafe, and faster SSD speeds. I feel very confident saying that the M2 Air is a meaningfully better computer than the Neo.
Again, Matt’s not wrong. If you are a savvy shopper with a very limited budget and need that push beyond what the MacBook Neo can give you, you are probably better off searching for a used or refurbished MacBook Air. Last week on MacBreak Weekly, Christina Warren made a strong case for picking up a refurbished or used M3 or M4 MacBook Air, which would cost slightly more than a MacBook Neo but far outclass it in terms of features.
But this entire conversation misses the most important thing about the MacBook Neo: It is sold in every Apple Store, on Apple’s website, and in every Apple sales channel. Most people won’t think to cruise for a refurbished Air—they will just go down to their local store, or pop onto Amazon, and shop for a computer. That’s why the MacBook Neo is important. It’s available to everyone, everywhere, and Apple will stand behind it as a new product.
Think about it. Apple has covered the pro market with the MacBook Pro lineup. The Neo is about to cover the mainstream and budget-conscious buyer.
But there’s a gap at the top. A premium ultralight for people who travel constantly, who want the absolute minimum weight and footprint, and who are willing to pay for it. A MacBook that weighs two pounds or less, with a stunning display and all-day battery life. Not a compromise machine. A showcase.
I would question the premise that you can get “the absolute minimum weight” along with “all-day battery life” (depending on how you define that)—but I do not doubt that Apple could create a laptop with M-series performance and good enough battery life, but with an emphasis on compactness.
But is there enough of a market for a fourth class of MacBook? As someone who has known and loved the 12-inch PowerBook, 11-inch MacBook Air, and even the 12-inch MacBook, I am sadly not convinced that this is a big enough segment for Apple to target when the MacBook Air exists.
And here’s the biggest reason I think a smaller laptop may never happen: Over the last decade, everything in macOS has gotten a bit bigger—not just OS elements, but even fundamentals of app design. When I was still using an 11-inch Air, I would often discover apps that couldn’t be resized to fit on my screen. The same happened with the retina MacBook. I’m afraid that the 13-inch display in the MacBook is probably as small as modern macOS and today’s Apple will reasonably go.
Apple announces new AirPods, Tim does an interview, and it’s about App Store standards, people!
It’s the least they could do
Apparently the news wasn’t big enough to make Apple’s “experience” two weeks ago: this week the company announced AirPods Max 2 with the H2 chip. This updated version comes at the same price and in the same colors but with Adaptive Audio, Conversation Awareness, and Live Translation, along with some other small enhancements.