Six Colors

Apple, technology, and other stuff


By Jason Snell

LLMs aren’t always bad at writing news headlines

A collage of news headlines rewritten by ChatGPT

On Monday I complained about Apple’s response to Apple Intelligence making mistaken summaries of news headlines. But here’s the funny thing: large language models are actually pretty good at writing news headlines.

The problem with Apple’s approach is that it’s summarizing a headline, which is itself a summary of an article written by a human being. As someone who has written and rewritten thousands of headlines, I can reveal that human headline writers are flawed, that some headlines are just not very good, and that external forces can lead to very bad headlines becoming the standard.

Specifically, clickbait headlines are very bad, and an entire generation of headline writers has been trained to generate teaser headlines that purposefully withhold information in order to get that click. (Writing a push notification is not the same as writing a headline, but it’s at least similar.)

Not to get all Fred Jones on you, but: I was trained to write headlines that made you want to read more, but didn’t withhold information. The idea was to compete for your attention, not pose riddles that could only be decoded by reading the story in question.

The now-dead news app Artifact built a killer feature that rewrote clickbait headlines on demand. It used the complete content of the news story to write a new headline that was always better than the ones being served up by various news organizations. Yes, when given enough information and asked to generate a headline, it turns out that AI is pretty good at the job!

Still, as Mike Krieger of Artifact explained in a Medium post, the company also built in some layers of human validation:

When we rewrite a title given a user request, that new title is initially only visible to the user. If enough people request a rewrite of a given title, the rewritten title will be escalated for human review and if it looks good, we’ll promote it to the new default title for all users.

One of the principles we use when applying AI to features inside Artifact is to be as transparent as possible. For both Summarize and the clickbait title rewriter, we’ve used consistent iconography (the star-like shape) to denote that there’s AI involvement.

It didn’t work out for Artifact, but there’s something here. Summarizing summaries isn’t working out for Apple, but more broadly I think there’s something to the idea of presenting AI-written headlines and summaries in order to provide utility to the user. As having an LLM running all the time on our devices becomes commonplace, I would love to see RSS readers (for example) that are capable of rewriting bad headlines and creating solid summaries. The key—as Artifact learned—is to build guardrails and always make it clear that the content is being generated by an LLM, not a human.


By Jason Snell

Apple Intelligence summaries might get warning labels. That’s not enough.

erroneous notification summaries

The BBC, following up on two reports of Apple Intelligence summaries that transformed its own headlines into factually inaccurate text, got a public response from Apple:

Apple has said it will update, rather than pause, a new artificial intelligence (AI) feature that has generated inaccurate news alerts on its latest iPhones.

The company, in its first acknowledgement of the concerns, on Monday said it was working on a software change to “further clarify” when the notifications are summaries that have been generated by the Apple Intelligence system.

Here’s Apple’s statement, as reported by the BBC:

Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback. A software update in the coming weeks will further clarify when the text being displayed is summarization provided by Apple Intelligence. We encourage users to report a concern if they view an unexpected notification summary.

The statement uses the beta tag it has placed on Apple Intelligence features as a shield, while promising to add a warning label to AI-generated summaries in the future. It’s hard to accept “it’s in beta” as an excuse when the features have shipped in non-beta software releases that are heavily marketed to the public as selling points of Apple’s latest hardware. Adding a warning label also does not change the fact that Apple has released a feature that at its core consumes information and replaces it with misinformation at a troubling rate.

Apple is shipping these AI-based features rapidly, and marketing them heavily, because it fears that its competitors are so far out in front that it’s a potentially existential issue. But several of these features simply aren’t up to Apple’s quality standards, and I worry that we’ve all become so inured to AI hallucinations and screw-ups that we’re willing to accept them.

We shouldn’t be. Apple’s shipping a feature that frequently rewrites headlines to be wrong. That’s a failure, and it shouldn’t be shrugged off as being the nature of OS features in the 2020s.

So what can Apple do now? A non-apology and the promise of a warning label isn’t enough. The company should either give all apps the option of opting out of AI summaries, or offer an opt-out to the developers of specific classes of apps (like news apps). Next, it should probably build separate pathways for notifications of related content (a bunch of emails or chat messages in a thread) versus unrelated content (BBC headlines, podcast episode descriptions) and change how the unrelated content is summarized. Perhaps a little further down the road, news notifications should be summarized based on the full text of the news article, rather than generating a secondhand machine summary of a story already summarized by a human headline writer.

Beta software contains an implicit promise that the developer will actively work to squash bugs and make the product better before it goes final. Adding a warning label in the interim is an easy band-aid, but it doesn’t address the underlying problem. Apple needs to do much more work here, and if it can’t, it needs to turn this feature off until it can release a version it can stand behind.


By Jason Snell for Macworld

25 years on, Mac OS X continues to be Apple’s standard bearer

2000 apple monitor with OS X

Twenty-five years ago, Steve Jobs took the stage at Macworld Expo in San Francisco and unveiled Mac OS X, ushering in a new era for the Mac and the world of desktop computing at large.

That sounds like hyperbole, but after watching the keynote for a second time—the first time was from the front row, thank you very much!—it’s remarkable what an enormous moment this was for Apple and the Mac.

It’s funny. What’s remarkable about the moment is actually how uneventful it seems. When I watch the video back, it’s almost surreal how Steve Jobs keeps doing utterly normal, boring things in Mac OS X while the crowd completely loses its collective mind. Viewed by someone without any historical context, it would seem like a cult being whipped into a frenzy by its leader.

But I was there, and I can tell you that it wasn’t that. This was the moment, after sixteen years of classic Mac OS—and let’s face it, the last five of those were pretty rough—when all the failings of the Mac were swept away and replaced with something modern, ready for the challenge of the 21st Century.

How did that work out for Apple? The keynote seems so weird now because almost everything in it is just how the Mac works, even 25 years later. Yes, interface styles have changed over time, but that moment on stage in January 2000 redefined the Mac for 25 years and counting.


25 years since the Dock’s debut

The Dock!

Sunday marks the 25th anniversary of the unveiling of Mac OS X, and yeah, I wrote the cover story for Macworld. While I was working on a forthcoming piece celebrating that anniversary, I asked my friend James Thomson if he had a good link about his time working on the Dock, which was unveiled as a part of that event.

He said he didn’t, but was apparently inspired enough to write one for me to link to:

The version [Steve] showed was quite different to what actually ended up shipping, with square boxes around the icons, and an actual “Dock” folder in your user’s home folder that contained aliases to the items stored.

I should know – I had spent the previous 18 months or so as the main engineer working away on it. At that very moment, I was watching from a cubicle in Apple Cork, in Ireland. For the second time in my short Apple career, I said a quiet prayer to the gods of demos, hoping that things didn’t break. For context, I was in my twenties at this point and scared witless.

The timeline is interesting. James wrote his classic Mac utility DragThing before working at Apple, then was hired by Apple, then ended up working on the Dock, and then left Apple… to resume working on DragThing.

Also: James’s story about Apple trying to hide his location from Steve Jobs is an all-time classic.


More Apple Intelligence, and whither Vision Pro

The year of Apple Intelligence comes around (again), and Apple needs to decide where the Vision Pro goes next. [Post-show: The long way to Passkeys and magic links.]



By Jason Snell

Quick Tip: Which USB devices are currently attached?

A shortcut showing the command below being used in the Shortcuts app.

I’ve recently switched from using two computers in two different offices (the hard-to-heat garage and my well-heated back bedroom) to using one computer (a MacBook Pro) docked in either location. There’s a lot that has gone into this decision, which I will detail in a future post, but this post is about one of the most frustrating aspects of this switch: the inability of my computer, automations, and settings to understand when the context has changed.

Fortunately, my Mac reacts to changes in my network configuration with aplomb, so I don’t have to dive deep into Network Locations, though if I did, there are several alternatives to classic utilities like ControlPlane.

But I record a lot of podcasts, and I have two entirely different USB devices that I use for audio. How can I make it so that when I press the “record” button, the right USB device is recording and receiving audio?

My overall solution ended up being pretty complex, but the key insight was to use a Terminal command to list all the connected USB devices. You don’t need to know a lick of Terminal to use it, because you can stick it in a “Run Shell Script” block in Shortcuts. Here’s the command:

ioreg -p IOUSB -w0 | sed 's/[^o]*o //; s/@.*$//' | grep -v '^Root.*'

This command will output a list of all your currently attached USB devices in plain text.
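For illustration, here’s the sort of thing you might see with a hub, keyboard, and microphone attached (hypothetical device names; yours will reflect whatever’s actually plugged in):

USB3.0 Hub
Magic Keyboard
Shure MV7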

(Optional explanation: ioreg will display an enormous list of devices and ports on your system. -p IOUSB restricts it to USB devices and -w0 makes it display complete devices one per line, without truncation. That result is sent to sed, which uses a regular expression to match just the names of the attached USB devices. The final step is to use grep to remove the initial line, which summarizes the list rather than listing an actual USB device. You don’t need to know this.)

At that point, you can build a Shortcut that alters its behavior based on the contents of the output. For example, my Shortcut’s next step is an If/then block that checks to see if the result contains the text “MV7” for my Shure MV7 microphone or “USBPre” for my USBPre2 audio interface. But it’ll work with any USB device that happens to be attached at a given time.
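If you’d rather prototype that logic outside of Shortcuts first, here’s a minimal shell sketch of the same check. It assumes the device-name substrings I happen to use (“MV7” and “USBPre”); swap in whatever substrings your own devices report:

#!/bin/zsh
# List attached USB devices (the same pipeline as above).
devices=$(ioreg -p IOUSB -w0 | sed 's/[^o]*o //; s/@.*$//' | grep -v '^Root.*')

# Branch on which audio device is present, mirroring the Shortcut's If/then block.
if echo "$devices" | grep -q 'MV7'; then
    echo "Shure MV7 detected"
elif echo "$devices" | grep -q 'USBPre'; then
    echo "USBPre 2 detected"
else
    echo "No known audio device detected"
fi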


By Jason Snell for Macworld

2025 will be the year of Apple Intelligence–again

As a sports fan, I’m besieged with ads for gambling these days. Sports media is full of experts who are happy to claim they know who’s going to win and who’s going to lose, but of course, if they really had all the answers they’d be rich and not flogging their predictions.

What I’m saying is, nobody knows anything. And while I’ve been covering Apple since time immemorial (okay, the 1990s) and predicting in this space for a decade, let’s just say that nobody’s perfect.

Still, it’s fun to think about the blank canvas that 2025 offers us. Here are my predictions for what’s to come in the next year. As always, no wagering.



Listen to Dr. Jones

“Raiders of the Lost Ark” is one of my favorite movies (and Steven Spielberg’s best, in my opinion), but I never knew about this “loose end” until I read about it in a compilation of deleted scenes:

A plot element involving the Ark of the Covenant was cut from the film and is only hinted at during the finale when the Ark is opened. Basically, there were 2 rules about the Ark not mentioned in the final cut of the film:

  1. If you touch the Ark, you die.
  2. If you look at the Ark when it is opened, you die.

This is first explained in additional dialogue for the scene when Indy and Sallah visit Imam. Before translating the writings on the headpiece that give the height of the Staff of Ra, Imam warns Indy not to touch the Ark or look at it when it is opened….

Notice that nobody ever touches the Ark throughout the rest of the film until the finale.

This scene is screenwriting 101 in that it properly sets up the rules of the Ark, so that when Indy shouts, “Don’t look at it, Marion!” at the film’s climax (spoilers for a 43-year-old movie, I guess) we understand why.

But Steven Spielberg had gone way beyond Screenwriting 101, even in 1981. He knew there was literally no need to include these rules, which turn the Ark from an unknowable supernatural object into something more mundane. And so that scene was cut, and at the fateful moment when the Ark is opened by Belloq and a crew of Nazi officers, noted archaeologist Dr. Indiana Jones just knows that he and Marion need to close their eyes and not look upon the fiery judgment of the God of the Old Testament.

Sometimes less is more.



By Jason Snell for Macworld

2024 predictions in review: Apple Intelligence didn’t surprise, but the M4 sure did

Hi, it’s me, Father Time. This is my 421st More Color column, only about 20 of which have been part of my annual attempt to predict the future while blaming myself for failing to properly predict the year gone by. (Look, if I got them all right, 421 of the columns would be prediction based!)

To the task of predicting, I bring years of experience. The Mac’s 40 years old and I’ve been associated with Macworld for 27 of those years, which is, uh… two-thirds of the Mac’s existence? Geez. You know, I was the summer intern once! (Kurt Cobain was alive then.)

Anyway, who better to listen to than Father Time, just before he turns into a New Year’s Baby? Next time I’ll once again fearlessly attempt to predict the future, but for this column we’re all going to point and laugh at my failures… or will we? Maybe I had a good year in 2024. Let’s see.


The long, strange story of Audio Hijack

Audio Hijack is a thing of beauty. And we almost lost it.

Back in the early Apple silicon era, this must-have Mac audio tool got a serious downgrade via a new installation process that required many steps and multiple reboots. The problem wasn’t really addressed until earlier this year, when Apple rolled out a bunch of audio permissions that allowed Rogue Amoeba to ship a new version of the app that didn’t require an installer or any reboots.

Now Rogue Amoeba leader Paul Kafasis tells the story:

In 2020, the disaster foreshadowed literally one sentence ago struck. Beta versions of MacOS 11 broke ACE, our then-current audio capture technology, and the damage looked permanent. When we spoke briefly to Apple during WWDC 2020, our appeals for assistance were flatly rejected. We spent weeks attempting to get ACE working again, but eventually we had to admit defeat. ACE as we knew it was dead in the water, and all options for replacing it involved substantial reductions in functionality. Though we did not discuss it publicly at the time, things looked grim for the future of our products.

Kafasis ascribes Apple’s change of heart to a passionate user base and a bunch of other developers (“some quite large,” he writes) who relied on Rogue Amoeba’s technology for their apps. Still, it took years for Apple to roll out a solution.

It’s scary to think that one of the Mac’s best apps could’ve been completely wrecked by Apple’s architectural changes. I’m glad that in this case, Audio Hijack and Rogue Amoeba’s other apps weren’t a victim of Apple’s out-of-balance attempt to lock down macOS without providing appropriate alternatives to fulfill the needs of users and developers alike.


Intelligence .2 arrives and Jason’s next laptop phase

We give an overview of our thoughts on iOS 18.2 and macOS 15.2; Jason’s got a new laptop but there are still some problems to solve. (More Colors and Backstage members, our hourlong monthly Q&A is also in this episode.)

[The podcast will be off the next two weeks. Back January 3!]



By Jason Snell

Appearance: Apple Intelligence’s generic humans

Choose from three different collections of appearances.

On Wednesday iOS 18.2 and macOS 15.2 were released, and we wrote about it here. One of my complaints was that you can’t just make generic figures of people—you have to choose actual people in your library.

This is wrong. You can do it—I just completely missed the feature, because it wasn’t positioned or labeled in a way that made me understand what I was looking at. In the interest of correcting the record and also informing people whose brains work like mine, here’s the deal:

To create an image in Image Playground that doesn’t use the face of someone you know, click or tap the Choose… button with the word Person above it. This will bring up the person picker, full of faces from your Photos library. But in the top left corner of the picker is another option, Appearance.

Appearance dialog

The first time you click or tap Appearance, you’ll be prompted to choose a default appearance for your creations. (In subsequent creations, it’ll default to your previous choice, but you can change it by tapping Edit.)

In the Appearance view, you’ll be able to choose from five different skin tones and then from three different collections of Appearances. This is where it gets a little weird: Rather than building a person, Memoji style, you choose from three different collections of thumbnails, which are the various Appearances that might come up when you generate an image. They appear to be weighted by gender, so the leftmost is (mostly?) women, the rightmost men, the middle more of a grab bag—but there’s a lot of variation within each collection of Appearances.

Some appearance choices

That’s it. When you select a set, you’re ready to create an image. Unlike images generated from one specific person, you’ll find that different generations will be very different in this mode, because the entire appearance of the person can vary. As I swiped through a single set, I found young women, older women, young men, and people of various racial groups. It’s a grab bag. It’s meant to be generic. Go with it.

They all passed (or failed?) the Kobayashi Maru.

Once you’ve picked the Appearance, you can still add all the other prompts you want, either picked from Apple’s suggestion list or from your own terms. (I generated a bunch of starship captains with laser guns in space, enough to pack an entire star fleet.)

Choose your Genmoji base appearance.

In Genmoji, instead of picking an Appearance, you pick an “Emoji” as a starting point. You can choose from the generic female, non-gender-specific, or male emoji templates, and choose a skin tone. All the Genmoji you create will be based on that template.


Video

December Backstage Zoom: Apple Intelligence and more

We got together with Backstage pass members live on Zoom earlier today to discuss all sorts of stuff related to this week’s Apple media event.

We’ve embedded the video below, or you can watch it on YouTube.

Thanks for being a Six Colors subscriber!


By Jason Snell

Mic Drop mutes your microphone everywhere

Mic Drop interface
Mic Drop offers temporary floating status warnings and a persistent Menu Bar item as options.

I’ve gotten so used to having a physical mute button on my podcast recording setup that it’s quite disconcerting when I use a different setup that doesn’t offer one.

In the winter months, I work most of the time in “Studio B”, a second desk in a back bedroom that’s climate controlled in a way my drafty garage can’t be. My microphone here is a sturdy Shure MV7, but I’m connecting it via USB and its mute button is a capacitive circle that’s super awkward to reach.

As a result, I’ve been trying to find a simple way to mute that microphone using the same push-to-mute, push-again-to-unmute gesture I use in my primary recording setup. After exploring a bunch of options, I’ve settled on Mic Drop, a free ($5 upgrade for pro features) Mac utility that does the job perfectly.

Mic Drop literally does everything I expected from it. It lives in your menu bar and has support for global hotkeys or AppleScript, optional audio and multiple visual notifications of mute status, the ability to choose which mics are muted and which ones aren’t, and even an optional push-to-talk toggle mode.

It’s a delight to find an app that doesn’t just do the basics, but (via its recently released 2.0 update) offers all sorts of polish that elevates it into a utility that’s truly worth recommending.

Mic Drop is available in the Mac App Store.

(Update: It now supports Stream Deck natively, too, which is awesome.)


By Jason Snell

Simply brings its piano app to Vision Pro

I took piano lessons when I was a kid, and always hated practicing. I can blame it on the cold room we kept the piano in, but part of the reason I hated it was that most of what I played was boring. (I didn’t love having to take a long school bus ride to my piano lessons, either.)

This week I took Simply Piano for Vision Pro for a spin, and it was anything but boring. The popular iPad app for teaching piano has come to Vision Pro, and so I sat down at the very same piano I used to practice on as a kid—it’s in a somewhat warmer room now—but with a Vision Pro over my head.

Simply Piano works by listening to you playing notes and detecting if you’re playing the right or wrong ones. It’s very clever, but the Vision Pro version adds in the ability to overlay a virtual keyboard on your real one, so it can provide visual cues (in the form of glowing notes) when you’re not sure which key to play. It also annotates your fingers, so you can see which fingers are supposed to play which notes.

Even all these years later, my sight-reading skills are above Simply Piano’s introductory lessons, but as I went through them I found that the visual augmentation did feel a bit like magic. If I had any complaint, it was that Simply Piano sometimes struggled to recognize when I was playing notes, especially two of the same note in quick succession.

The Vision Pro app also comes with a virtual keyboard feature that allows you to practice on any flat surface, by overlaying a keyboard and letting you play it. I like this idea a lot, but in practice I found that it didn’t work very well. It played incorrect notes and even played notes when my fingers weren’t down. I think there’s something here—and having the ability to practice piano when you don’t have a keyboard handy is an amazing idea!—but I gave up pretty quickly and went back to my real piano.

Maybe if I had an iPad, or a Vision Pro, I would’ve practiced the piano more faithfully back in the day. Or maybe not. But this app seems to do a pretty great job of teaching the basics of piano, no teacher (or long school bus ride) required.


Rise of the laptops

Holiday deals and holiday Mac purchases; the rise of Mac laptops and the fadeaway of Mac desktops; Dan uses the command line to fix his lights.


