By Jason Snell
April 30, 2020 5:04 PM PT
The future of help
Software is hard to use. This has been true since the moment the first computer was switched on, and it’s still true. Yes, my daughter navigates the mind-bending Snapchat interface like a dolphin swimming through warm Hawaiian waters, but the other day I still had to walk her through a complicated cache-clearing troubleshooting session on her Chromebook, which in turn unearthed a DNS-caching problem on our home network. And all because Netflix wasn’t loading correctly.
And isn’t that almost everyone’s experience? You get really good at a few key things, but so much else is complicated and mysterious unless you’re a tech expert, like many of you are.
There was a time when the tech industry recognized that it was making complicated and frustrating software and, in a rare moment of introspection, decided to fix the problem. The result, unfortunately, was a wave of stuff that has either been forgotten or is remembered ruefully. Microsoft’s Clippy assistant and Microsoft Bob. A slew of “Wizards” to walk you through tasks. Anyone remember Apple Guide? These were all attempts to have software hold a user’s hand and walk them through places where software had previously failed them. Shock reveal: The new software also failed the users, for the most part.
The worst of it is that it seems like everyone’s just given up. Designers try to make their products usable and discoverable when they can, but the bar is pretty low. People love apps in part because they’re (generally) so simple, but even they can be confusing and obscure.
We need to try again, not give up. Sometimes technology is so complex that it can’t be boiled down into a simple, discoverable interface. To use that technology effectively, someone has to teach you how to use it. If you don’t have a teacher at hand, your technology should offer to teach you. We’ve tried it before, but it’s time to try it again.
We’re all still trying to figure out just what the Touch Bar on the new MacBook Pros is for, but the other week it struck me that it might be a great interface for teaching people how to use software. It’s interactive, so it can customize itself as it watches you work. It can work in concert with tutorials to show off different ways to use software. Every button can be labeled with clear text explaining what will happen when you press it. I don’t think I’ve seen any examples of the Touch Bar being used to help teach people how to use more complex software, but there might be an opportunity there.
An even bigger opportunity, in the long run, might be the intelligent assistants that are being stuffed into almost every device we own. The most useful interactions I have with these assistants come when they save me steps—in other words, when they take complexity and boil it down to simplicity.
When I hold down my iPhone’s home button and say, “Set an alarm for 5 a.m.,” I’m saving a trip to the home screen to find the Clock app, switch to the alarm tab, make a new alarm, and spin the clock dials to get the right time. When I press the crown of my Apple Watch and say “start outdoor run,” I’m avoiding a visit to the Apple Watch honeycomb to figure out which green icon represents the Workout app. When I stand in the kitchen and say “Alexa, set a timer for 15 minutes,” I’m keeping my hands free to cook rather than needing to wash them off and then try to navigate a touchscreen interface with damp fingers.
There’s a lot of potential here—not just to teach, but to eliminate the need to teach. If I get an email from a bozo, shouldn’t I be able to say, “Hey Siri, send all emails from this guy to the trash”? And if the assistant needs more information, it should ask until it has enough: “Do you want me to filter past emails from this bozo, future emails, or both?” In the end, do I need to know about email filtering interfaces? No, I just need to give orders to my phone. (By the way, Apple Mail for iOS still doesn’t offer mail rules? Come on!)
This can extend deep into complex systems. Apple brought Siri to macOS last fall. In the future, shouldn’t I be able to say, “Siri, when I use Logic I want the Strip Silence command to use Control-X as a shortcut” and have my preference noted automatically? And consider automation. I might never write a script in my life, but if I ask Siri to resize a photo, save it as a JPEG, and open it in Photoshop, shouldn’t it be able to figure out what I want? And if I do it more than once, shouldn’t it remember that this is a series of steps I often perform? And if I don’t use Siri, but I perform a repetitive task enough times, shouldn’t Siri offer to create some sort of shortcut to get my work done faster?
I realize this is all complex stuff. But isn’t this the promise of technology? I’ve bodged together AppleScripts and Automator actions with the best of them, but in the end I don’t really care about the method by which my work gets simplified—I just want it to be as simple and frictionless as it can be. Right now, I have to build some of those tools and connections myself. But these are things computers should be good at. Apple, Microsoft, Google, and others should always be striving to improve simplicity and usability and eliminate complexity. Clippy may have sucked, but that’s no reason to give up. We can, and should, do better.
This is a Six Colors members-only story that's been unlocked for all to read.