I’ve been using an Elgato Stream Deck for more than a year now. It’s a USB peripheral that offers a grid of buttons with a display underneath, so each button can be labeled with an icon and/or text that you specify. The goal of the Stream Deck is to make esoteric actions on your computer easier by letting you place them on dedicated keys with custom artwork, so you’ll always know to press the blue button instead of typing Command-Shift-Option-3.
I was initially quite skeptical about the Stream Deck. I’ve got a perfectly nice keyboard, full of keys on which to map commands. Why not just memorize those keyboard shortcuts?
And yet, after using a Stream Deck Mini that I bought at Target on a whim for a few months, I decided to upgrade to the full-sized Stream Deck. It turns out that, yes, the concept of wiring up commands I could never recall from the keyboard shortcuts, of placing front and center all the macros and shortcuts and scripts I spent hours building and then promptly forgot existed, made it all worthwhile. I had gone from a skeptic to a convert, and it only took a few months—and a bunch of lessons learned.
The original HomePod flopped. But Apple wasn’t discouraged. It reconceived the product and released the HomePod mini, a less high-tech, more affordable iteration that seems to have been more successful in the market and suggested Apple had larger home ambitions.
Now come rumors that Apple’s planning on building a new HomePod. In fact, there are rumors that as many as three new HomePods could be headed our way. Could it be, in the same year that the iPod finally faded away, that the HomePod will transform itself from a failure into a key Apple product line?
It’s time. Not just for a HomePod comeback, but for HomePod domination. Two HomePods? I want more. I want four. It’s time for Apple to rush back into the smart home game and the HomePod can lead the way.
Corey was the glue that held all of us here at the factory together. He designed the original factory logo, created dozens of beloved icon sets and somehow managed to help guide the company through thick and thin…. Corey was instrumental in the growth and success of the company from the very start. He loved participating in judging our annual icon design contest, Pixelpalooza, back in the day. His keen eye for design touched nearly all of our products and made them infinitely better including IconBuilder, xScope, Twitterrific and Triode. He even created Linea Sans, a wonderful font based on his own handwriting that users will be able to enjoy for years to come.
Condolences to all of Corey’s family members, colleagues, and friends.
The much-rumored Apple AR/VR headset appears to be on the horizon, so Myke and Jason break down several reports about its development and debate what approach Apple should take. And as WWDC approaches, there are also rumblings about a new HomePod, Apple Watch design changes, and more Apple displays.
There I was, trying to type a Croatian phrase into Google Translate when I found myself wondering “Wait, there must be an easier way to enter characters with diacritic marks than by trying to guess the right combination of Option and letter key.”
Friends, there is. File this one under one of those tips that I’d either forgotten or never really realized, in large part because my history steeped in the classic Mac OS meant I always thought of looking these things up via Key Caps.1
A quick search turned up this macOS support article, which points out that—in a move borrowed from iOS—all you have to do is press and hold a key to see every available character associated with that key. More to the point, each character has a number associated with it—hit the corresponding key, and it’ll pop it right in there for you.
Certain letters with diacritics I’ll always have hardcoded in my memory (Option-e plus a letter for accent aigu, for example, or Option-c for ç—thanks, years of French homework!) but for characters I more rarely need to type, this definitely speeds up the process.
Which does still exist as Keyboard Viewer, though you have to jump through the hoops of enabling the Input menu in the Keyboard pane of System Preferences, or as part of the Accessibility options, before you can bring it up. ↩
[Dan Moren is the East Coast Bureau Chief of Six Colors. You can find him on Twitter at @dmoren or reach him by email at email@example.com. His latest novel, The Aleph Extraction, is out now and available in fine book stores everywhere, so be sure to pick up a copy.]
The lead-up to Apple’s annual Worldwide Developers Conference is always rife with rumor and speculation. But so far this year leaks have been few and far between, and most of what has trickled out into the public eye has been on the vague side. Take, for example, Bloomberg’s usually very well-sourced Mark Gurman, who said last week, with nothing more in the way of explanation, that iOS 16 would contain some “fresh Apple apps.”
Let’s assume for a moment that this isn’t merely a resurgence of 1990s slang and that the apps in question aren’t “funky fresh,” but rather that the company is intending to roll out new and/or updated versions of some of its built-in apps on iOS. That certainly sounds promising and, as you might imagine, I have some ideas of exactly what that could (or should) entail.
What’s this? Supply-chain analyst extraordinaire Ming-Chi Kuo has suggested that Apple’s investigating E Ink displays for future foldable iPhones. Now, Apple surely investigates lots of things, and most of them never make it across the finish line to become real products.
But as a long-time admirer of E Ink as a technology, I’m excited about the possibility that Apple might use it in future devices. E Ink is a niche technology with some very real limitations, but it’s also got some huge advantages.
For the second year running, Apple has offered a preview of updated accessibility features coming to its platforms later this year. The announcements come just ahead of Thursday’s Global Accessibility Awareness Day, which goes by #GAAD online.
The preview is notable for spotlighting features that most people won’t use, but that matter a lot to those with disabilities. It’s also notable because it’s a departure from the company’s typical close-to-the-vest approach to what’s coming on the roadmap.
Here’s a look at what Apple announced, and why it matters:
Door Detection. LIDAR-equipped iPhones and iPads have had a feature called People Detection since iOS 14.2. Using the Magnifier app and the VoiceOver screen reader, a blind user can learn whether the device camera sees a person, and where that person is in relation to the device. That was handy for social distancing. Door Detection will use the same mechanism to alert you when the device identifies the presence of a door. That’s a more practical use of LIDAR for many blind and low-vision users than even People Detection, both indoors and out. Door Detection can tell you about the door’s attributes, including whether it’s open or closed and if there’s a doorknob. Apple says you’ll also be able to read signage, like a room number or a notice on the door. I presume that’s just an application of Live Text, but it’s a great companion for Door Detection.
The use of LIDAR in accessibility has always felt like a bit of a preview of what we might see in future Apple glasses or headset, and it’s encouraging for users of accessibility features that the company is potentially taking their needs into account as it develops future wearables. My hope is that LIDAR sensors, only available in the highest-end phones and iPads, will come to more of the iOS product line. For a blind user who doesn’t buy a phone based on high-end camera features, doing so just to get access to LIDAR-based accessibility features is a tough sell.
Live Captions. Apple joins Google, Microsoft and Zoom, among others, in offering live captions, but they’re global on iOS and macOS, so you can use them in any app with audio output. That’s the superpower here. Just pick up your device and enable captions, whatever you’re doing. Deaf and hard-of-hearing people often bemoan the state of auto-generated captions, so some testing will be warranted.
Watch accessibility improvements. Last year’s May accessibility preview, which I covered on my podcast, Parallel, brought AssistiveTouch to the Apple Watch. It’s a longstanding iOS feature that provides a simplified way to perform gestures for those with motor disabilities. This year, there are more watch gestures, called Quick Actions, and a new feature called Apple Watch Mirroring.
If you have a motor disability, Quick Actions gives you the choice to make hand gestures instead of manipulating the watch screen. An all-purpose “double-pinch” gesture will answer a call, control media playback, or take a photo. Mirroring is like AirPlay for the Apple Watch, sending the watch screen to your phone. That’s also useful for people with motor disabilities who can more easily use the phone screen than the smaller, less conveniently located watch face.
I’m intrigued by the possibilities for low-vision users, too, because the phone screen is sometimes far easier to use at close range and in zoomed mode than the watch. And you can use AssistiveTouch or Switch Control, if that’s how you interact with your phone.
Buddy Controller. Turn two game controllers into one, so two people can play a game together, with one able to assist someone with a physical disability who has difficulty manipulating some or all of the controller’s features.
Siri Pause Time adjustment. If your speech is slow or impaired, having a little extra time to complete your thought before Siri tries to act on it could make it a more useful tool.
Customization for Sound Recognition. Introduced in iOS 14, Sound Recognition allows your device to listen for sounds in your environment, like water running, a baby crying or a siren, and then notify you with a visual alert. It’s a useful tool for getting the attention of someone who’s deaf or hard of hearing. But you’re currently limited to one of 15 sounds. It’s a good list, but what if a sound you needed to know about isn’t on the list? Apple says that later this year, you’ll be able to record and save sounds you’d like to use with Sound Recognition. (Perhaps you have a unique doorbell or an appliance with a special trill?) Customization probably should have been part of Sound Recognition to begin with, but it’s common for Apple to roll out a totally new accessibility feature, then build its capabilities over time.
Apple detailed a few other new features on Tuesday, including 20 more languages for the VoiceOver screen reader and Voice Control Spotlight mode, which you can use to dictate custom word spellings.
Big Deal or No Big Deal?
This is a nice grab bag of features, with Door Detection and the Apple Watch updates offering the most intriguing possibilities. It’s also possible there are more where these came from, as occasionally happens when the late betas start to become available.
On Monday Rogue Amoeba released SoundSource 5.5, the latest version of its handy Mac sound-routing utility that—let’s be honest—is doing all the heavy lifting for a feature that should probably be a core part of macOS. (Apple doesn’t seem to really care about Mac audio, and that’s good for Rogue Amoeba’s array of products.)
As you might expect, I immediately dove into SoundSource’s new automation tools. Most of the time, I listen to music on AirPlay speakers and most system audio goes through my Apple Studio Display’s speakers. But when I’m writing, I often prefer to pop in my headphones and get focused.
That process takes several steps. I have to click on the AirPlay icon in the Music app, so that the audio stops using AirPlay. Once the audio is coming out of the Studio Display, I need to use the Sound preference pane to redirect the audio to my Mac Studio’s headphone jack—or use SoundSource to intercept Music and send it to the headphone jack.
But thanks to SoundSource 5.5, I’ve created a shortcut that automatically toggles between those two states, and assigned it to a Stream Deck button. Here’s how it works:
Using a new SoundSource action, I’m detecting where the app is currently routing audio for the Music app. I’m using this data point to determine whether I’m toggling my intense headphone-listening settings on or off. If the Music app is set to External Headphones, the shortcut knows I’m listening there, and so the shortcut will use an If block to set the audio output back to my AirPlay speakers.
The first two steps are straightforward: For customizability, I’ve added a text block with the name of the AirPlay speakers I’ll be using. Then the shortcut uses SoundSource’s Set Source Device block to set Music back to its default state (outputting through the default audio device), and then—mostly to prevent some blaring audio artifacts during the switchover—waits for a second before executing an AppleScript script.
That script is hairy, because getting Music to change AirPlay sources via scripting is hairy. (Note to Rogue Amoeba: If Apple won’t make this more easily automated, maybe you could?) I found a solution to the problem in a Mac OS X Hints entry from 2013 by iTunes/Music scripter extraordinaire Doug Adams, and adapted it to my needs. (Doug’s script asks you to pick an AirPlay source, so I omitted that portion.)
The Otherwise portion of the shortcut basically does the reverse action in the toggle—it uses another copy of that AppleScript script to set the AirPlay target to my computer, then uses SoundSource’s Set Source Volume and Set Source Device to get Music playing in my headphones at an appropriate volume.
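As an aside, newer versions of the Music app expose AirPlay devices directly in their scripting dictionary, which may offer a simpler route than adapting the 2013 hint. This is only a sketch under that assumption—it isn’t the script my shortcut actually uses, and “Living Room” is a hypothetical speaker name:

```applescript
-- Sketch only: assumes Music's modern scripting dictionary
-- (macOS 10.15 and later), where AirPlay devices are scriptable objects.
tell application "Music"
	-- Route playback to a named AirPlay speaker ("Living Room" is a
	-- placeholder; use the name shown in Music's AirPlay popover)
	set current AirPlay devices to {AirPlay device "Living Room"}
end tell
```

If that property behaves as documented, the Otherwise branch of a toggle could assign the Mac’s built-in output device the same way.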
Pretty easy, other than having to dig up a good way to change the AirPlay targets in Music. (Thanks, Doug.) And now I’ve got a button to press to do a bunch of dumb tasks that I used to have to do myself. This was the biggest itch I wanted to scratch with SoundSource 5.5, but I’m sure plenty more will present themselves. Now that I can automate all my Mac’s inputs, outputs, and individual app audio routing, the power’s in my hands—and my shortcuts.
I made a mistake today. I posted today’s episode of Downstream with the feed pointed at the previous episode of Downstream. There are various workflow reasons why this happened, but the bottom line is: I pasted last week’s download URL in and then didn’t change it, resulting in everyone getting the wrong episode. Oops.
So, in the aftermath of fixing that error, I tried to figure out a way to prevent it from ever happening again. I took inspiration from a different system I use to post a different podcast to do it. Both approaches have one utility in common: Keyboard Maestro. (You could easily use another macro utility such as TextExpander if you wanted.)
I use the original macro to post The Incomparable. In that case, I am provided a unique ID string from my host, ART19. ART19 generally prefers to host podcast RSS itself, but I didn’t want to do that, and the ART19 download URL can be derived from the unique ID that displays on the webpage used to post a new episode.
I could type that URL by hand and then paste in the unique ID, but the chances for error in doing that are extremely high. If I mistype a character, or copy a previous episode’s ID and forget to change it, I’m in trouble!
So I created a Keyboard Maestro macro that fires when I type the string ;art19 and replaces that text with `https://rss.art19.com/episodes/%SystemClipboard%.mp3`. Now all I have to do is copy the unique ID and type that special string in The Incomparable’s CMS, and there’s no chance I will mistype something and mess it up.
For Downstream it’s a little more complicated. Every file is named the same, with only the episode number changed. Rather than copy and paste it from the previous episode and increment by one, I decided to steal some code from my Template Gun AppleScript app and check Downstream’s RSS feed to determine what the current episode number is:
-- Fetch the show's RSS feed (the newest episode appears first)
set theFeed to (do shell script "curl https://www.relay.fm/downstream/feed")
-- Only the top of the feed matters, so trim it down for speed
set theFeed to (characters 1 thru 2500 of theFeed) as string
-- Return the text between the first <itunes:episode> tag pair
return characters ((offset of "<itunes:episode>" in theFeed) + 16) thru ((offset of "</itunes:episode>" in theFeed) - 1) of theFeed as string
Keyboard Maestro places the result of that script in a variable I called epnum, and then when I type ;downstreammp3 it replaces what I typed with https://traffic.libsyn.com/secure/relaydownstream/downstream%Variable%epnum%.mp3.
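The same extraction is easy to sanity-check outside AppleScript. Here’s a hedged shell sketch of the idea, run against a made-up feed fragment rather than the live feed (in practice you’d pipe in the output of `curl -s https://www.relay.fm/downstream/feed`):

```shell
# Pull the value inside the first <itunes:episode> tag.
# The feed fragment below is a stand-in for the real RSS.
feed='<rss><channel><item><itunes:episode>248</itunes:episode></item></channel></rss>'
epnum=$(printf '%s' "$feed" | sed -n 's/.*<itunes:episode>\([0-9][0-9]*\)<\/itunes:episode>.*/\1/p' | head -n 1)
echo "$epnum"   # the newest episode number, here 248
```
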
(Update: Sure, you can do this in Shortcuts. Here’s a link to the same process as a six-action Shortcut.)
That’s it. Now I’ve transformed two instances of perilous typing in my life that require me to know a specific URL pattern to simple auto-expanding macros.
After years watching the old Netflix cruise along as the top streamer, things are getting interesting as it shifts gears and engages the realities of today’s streaming scene. We discuss Netflix changes and Julia reviews Disney’s financial results.
Using advancements across hardware, software, and machine learning, people who are blind or low vision can use their iPhone and iPad to navigate the last few feet to their destination with Door Detection; users with physical and motor disabilities who may rely on assistive features like Voice Control and Switch Control can fully control Apple Watch from their iPhone with Apple Watch Mirroring; and the Deaf and hard of hearing community can follow Live Captions on iPhone, iPad, and Mac. Apple is also expanding support for its industry-leading screen reader VoiceOver with over 20 new languages and locales. These features will be available later this year with software updates across Apple platforms.
The announcement is in honor of Global Accessibility Awareness Day—last year, the company made a similar announcement, previewing forthcoming features like AssistiveTouch for Apple Watch and Background Sounds for iOS 15.
These new features really open up a lot of possibilities, but the one I’m most excited about is Live Captions. Apple’s had a version of this technology in its Clips app for some years (and clearly makes use of similar functionality with Siri’s language processing and dictation). But on Apple’s platform you previously needed to turn to a third-party app for something like captioning a FaceTime call for deaf or hard of hearing users.
As someone who has two parents who both have difficulty hearing, this stands to be a big help. I am curious to see how well the feature actually works, and how it handles a big FaceTime call with a lot of participants; Apple says it will attribute dialog to specific speakers. Live Captions is also supposedly available for any audio content, which means other video conferencing apps may be able to take advantage of it as well—though it’s unclear whether that means through an opt-in API or just by default.
In addition to these major feature announcements, Apple’s press release mentions a number of other improvements, such as new Apple Books themes to make it easier to read text, Siri Pause Time to allow users to specify how long Siri will wait before responding to a request, and an improvement to Sound Recognition that lets you train it to listen for a specific version of a sound (e.g., your particular doorbell), among others.
Is Apple ready to embrace USB-C across its entire product line? Jason loves his Playdate, but is frustrated by Apple Music playing songs he dislikes. And the music may go on, but the iPod won’t be coming along for the ride.
The larger technology companies get, the more and more commonalities there seem to be between their products. That’s probably not surprising: after all, if only a couple of huge companies are developing smartphone operating systems, chances are they’ll get closer and closer over time as companies borrow from each other, playing leapfrog as they continually innovate.
Like any giant company, Apple’s no stranger to having features similar to those in its products rolled out by competitors. But it’s also hardly one to ignore a good idea, even when it’s created by a rival (for example, the graphical user interface on desktop computers).
This past week, Google held its annual I/O developers conference, at which it showed off a ton of new devices and features for its products. And, as always, there were those who noted that many looked like they’d been pulled directly from Apple products. So, turnabout being fair play, here are a few places where Apple might be able to take a cue from one of its biggest competitors.
Up until recently, USB-C was more of a fluke in my household—a strange visitor from a possible future, in which we all used small, reversible plugs. Sure, my iMac had a couple of Thunderbolt ports that use the USB-C connector, and every once in a while a random cable might have a USB-C plug on it, but by and large we remained a good old USB-A household.
Even by late 2020, when I bought a new M1 MacBook Air that had only USB-C ports, the connector was still more of a curiosity than something in daily use. Truth be told, I didn’t plug many things into my laptop, so I wasn’t even really living the Dongletown lifestyle. I did have to buy a USB-C-to-USB-mini cable in order to use my ATR-2100 travel mic while I was on the road, but as the Air arrived during the pandemic, I wasn’t even really traveling…