By Jason Snell
December 28, 2016 3:49 PM PT
I produce podcasts featuring different people using different microphones in all sorts of different homes, which is to say that the nature of the sound files I receive from my panelists can vary widely.
My goal is to make everyone sound as good as possible for the benefit of the listener—and eliminate telltale background noises that would come and go as different people speak. As a result, I spend a lot of time (and have spent more money than I’d expected) trying to remove noise from people’s audio files.
This sort of stuff isn’t for everybody—you don’t need to buy expensive software and spend half an hour or longer processing all of your audio files in order to make a good podcast. (Also, in most cases the best long-term solution is to get your panelists to improve their equipment or technique, not to fix it in post.) In fact, there are times when I wonder if all the work I put into the removal of noise from audio files is something listeners even notice. But I notice. And I do think getting the noises out improves my podcasts.
Anyway, there’s a lot of software out there that will let you remove noise from your podcasts. Most of them work the same way: you “train” the software on a portion of the audio that contains only the noise you want to remove, which is generally a moment when your subject isn’t talking. In that moment of personal silence, the recording is pure noise: the whirr of a laptop fan, the buzz of a heater, and the hiss of a microphone that does a very good job of picking up room noise.
If you’d like to try this out, consider Audacity, which is free and offers a de-noising plug-in. Another option is the $149 SoundSoap. Adobe includes a de-noising effect with its audio-editing app Audition. As for me, for the last year or so I’ve been using the $249 iZotope RX 5, which is a combination of audio utilities that let you de-noise, de-hum, and de-reverb audio.
Here are some before-and-after samples. We’ll start with a particularly noisy track from my pal David J. Loehr, which may have actually been recorded in a hotel room, not his usual location. From the waveform, you can already tell this is a noisy track: The big spikes are when David is talking, but when he’s not talking there’s still a pretty thick line. That’s the sign of background noise. (There’s also a big empty gap in the middle; that’s when David muted his microphone entirely.)
iZotope RX 5 also provides a second way of visualizing audio, which is via an orange-tinted interface that indicates noises at specific frequencies. That’s most visible across the bottom of the screen. Those solid bars are background hums—they sit at specific frequencies and just keep on making noise.
Most de-noising plug-ins will take care of background hums, but iZotope RX 5 offers a separate de-hum plug-in that is especially effective at destroying them. To remove the hum, I select a portion of the audio that contains the hum and click the Learn button in the De-hum window. Then I select the entire track (or at least the portion of the track that contains the hum) and click Process to remove the hum from the selected area. As you can see in the image below, after I click Process, the two orange bars at the bottom of the waveform have vanished from the selected portion of the audio file. The hum is gone entirely.
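Hums sit at fixed frequencies (60 Hz mains power in the US, plus its harmonics), so a de-hummer is essentially a set of very narrow notch filters parked on those frequencies. Here's a minimal sketch using the standard biquad notch from the Audio EQ Cookbook; this illustrates the general technique, not how iZotope's De-hum is actually implemented:

```python
import math

def notch_coeffs(freq_hz, sample_rate, q=30.0):
    """Biquad notch coefficients (per the RBJ Audio EQ Cookbook), normalized by a0."""
    w0 = 2 * math.pi * freq_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(samples, b, a):
    """Direct-form I: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

A high Q keeps the notch narrow, so a 60 Hz hum is nulled while nearby frequencies in the voice pass through nearly untouched; a real de-hummer would chain several of these for the harmonics at 120, 180, and 240 Hz.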
While removing the background hum is a major part of the noise-removal puzzle, there’s still other background noise. That’s why I’ll now select a portion of audio and click Learn on the De-noise window. Then I select the entire track (or the portion of it containing the noise I want to remove) and click Process to remove the noise.
As you can see from the image below, the area I processed shows up with the thinnest of waveform lines and appears largely black, with no overlaid orange speckles indicating noise. This “silent” part of the track is now truly silent.
In truth, most of the “silent” portions of my guests’ audio tracks aren’t ever heard by podcast listeners. Whether you use a noise gate or a Strip Silence feature in an app like Logic Pro or Ferrite (that’s my approach), quiet portions of someone’s audio tracks are automatically squelched.
The value in removing noise isn’t making the quiet parts quiet—it’s making it so that the parts in which your panelists are talking don’t also contain hums and other background noise. Even when someone’s talking, there are natural pauses through which hums and noise can bleed. If I can remove them from everybody’s audio track, you won’t be distracted by the character of the audio changing dramatically every time someone else starts talking.
The screen shots from iZotope RX 5 are fun, but hearing is believing: Here’s a section of that track from David Loehr, before and after I removed the hum and noise.
I mentioned above that iZotope RX 5 also includes a de-reverb effect. That’s actually the primary reason I upgraded to iZotope from SoundSoap—some of my panelists have very echoey recording spaces. In time, perhaps they’ll change their recording set-up and it won’t be a problem, but I’d like to be able to suppress as much room echo as I can in the meantime.
Musicians add reverb to tracks all the time, but the idea of removing reverb seems kind of crazy. In fact, it requires a whole lot of wacky mathematical modeling of sound decay at various frequencies. But you know what? When it works, it’s magical.
This Christmas, my friend James Thomson joined me (and David Loehr!) for a podcast about the “Doctor Who” Christmas Special. James couldn’t use his usual recording location, however, because his mother-in-law was in town and was sleeping in that room. So he recorded from his kitchen, which was not the ideal recording location. It was a bit echoey.
If you’d like to hear what James’s original audio sounded like, what it sounded like after de-reverbing, and then what it sounded like with de-noising added, here’s a sample file.
Should aspiring podcasters run out and spend several hundred dollars for professional audio software? No. Start with Audacity or, if you’re using Audition, the built-in de-noising features. But if you’re interested in taking the next step—or you’ve got some brutal audio that you need to improve—you’d be surprised at the quality of the results you can get with a little bit of time and some clever software.
By Jason Snell
December 16, 2016 3:17 PM PT
I’ve written about using Ferrite to edit podcasts on iOS, but sometimes a video does a better job of demonstrating how software works. So with that in mind, I edited (or to be more accurate, re-edited) this week’s episode of Clockwise in Ferrite on my iPad Pro and captured the audio and video while I was doing it. The full edit took about 25 minutes, but I’ve compressed it substantially in this annotated video of the process.
(You can see a time-lapse of me editing on the Mac in Logic Pro X if you’d like to compare.)
By Jason Snell
December 2, 2016 10:05 AM PT
There were some podcasters at the Úll conference in Ireland last month, and at one point when we were talking shop I complained again about how iOS doesn’t support files on external storage devices that aren’t photos or videos.
This means that if I travel to record a live podcast using a multi-track recorder like the Zoom H6, I have to bring a Mac with me to offload the files. Oh, sure, I can edit a podcast on iOS with ease, but how to get the files over there?
One of the people at Úll—I believe it was Elias—suggested I try the Toshiba FlashAir Wi-Fi SD card. There have been many Wi-Fi-enabled SD cards—I used an Eye-Fi for years—but this one has an iOS app that actually lets you select any file on the card and open it in any app.
There are a bunch of caveats, as you might expect. The FlashAir app isn’t particularly elegant, but it’s functional. The functionality to open a file in another app via the share sheet is off by default, so you have to turn it on. Wi-Fi cards can suck battery, though the FlashAir turns off its Wi-Fi functions after a few minutes if they’re not being used.
But the upside is tremendous! With this approach I can travel somewhere with only an iOS device and my portable recording set-up, record a live audio session, import those files to my iOS device, and then edit and post that audio session, all from iOS.
Now, this doesn’t get Apple off the hook—its card-reader accessory should really be able to read other file types, and more generally iOS should be able to connect to storage devices and let you see the files, whether they’re photos or Word documents. But it closes another gap for my own iOS-based podcast workflow, and so I’m excited about that.
By Jason Snell
December 2, 2016 9:32 AM PT
One of my recent tech quests has been to find a way to record and edit podcasts when traveling with an iOS device and no Mac. The best approach I’ve found so far—and I’ve used it a few times—is to talk on Skype on an iPhone with a pair of earbuds while simultaneously recording myself on a good microphone on an iPad.
Look, I didn’t say it was a good approach. Just that it was the best one I’d found so far. Though I never travel without my iPhone and iPad, the two-device approach to recording is inelegant to say the least. In addition, the person I’m talking to on Skype hears me through a lousy microphone, and I can’t hear my own voice being returned to my ears. (That’s important, because if you can hear your own voice you can tell when you’re not talking into the microphone, and it makes your own impression of your voice sound less like you’re talking with your ears full of water.)
In testing the Audio-Technica ATR2100-USB for my story about the sub-$100 podcast studio, I realized that I had a better option for iOS-only recording. It’s still clunky, but the person on the other end of the Skype call can hear me clearly, and I can hear my own voice in my ears.
Here’s the trick: The ATR2100-USB is a rarity, a microphone that offers both a USB port, for direct connection to a digital device, and an XLR port, for an analog connection to a mixing board or other audio interface. And you can use both connections simultaneously.
So I attach the ATR2100-USB to my iPad or iPhone with Apple’s Lightning-to-USB adapter. The old model works; my iPhone 7 was able to power the microphone itself, though it’s possible that some models might require a power assist from the newer Lightning to USB 3 Camera Adapter. Once the microphone is attached to the iOS device, it becomes the audio input and output for all apps, including Skype.
I plug my headphones into the headphone jack on the microphone, so I’m getting zero-latency feedback from my own voice as well as hearing the audio from Skype, channeled back from my iOS device.
Once that’s hooked up, all I need to do is record my microphone audio on the recorder while conducting my podcast via Skype. In the end, I’ve had a clear conversation and been able to hear my own voice, and my recorder has a pristine copy of my microphone audio.
There’s one final step—transferring the audio file from my recorder back to the iOS device—which requires more hardware. And this setup still doesn’t let me walk away with a recording of the other side of the Skype conversation, which is useful as an insurance policy in case someone else’s recording fails.
If you don’t already have an ATR2100-USB and a portable recorder with XLR plugs, I don’t think I can recommend that you spend money on this option. But if you happen to have the component parts, like I do, you have a single-iOS-device podcast studio ready to go.
By Jason Snell
November 18, 2016 10:36 AM PT
Podcasting is rapidly becoming an industry, with big money and big companies rushing in. But it is also still what it always was: A place where anyone’s voice can be heard. Anyone can make a podcast and post it to iTunes and, with luck and perseverance, find an audience.
One of the biggest hurdles in making a good podcast has always been the expense of equipment. Audio equipment can be expensive, especially the stuff that’s made (and priced) for professionals. One of the good things about this latest podcast renaissance is that the price of pretty good recording equipment has come down a whole lot lately.
Since I write about podcasting a lot, I get asked a lot about what the right starter set-up should be for a podcaster. To be clear—you could use your iPhone’s microphone or a set of EarPods and record a podcast with no extra investment, and you don’t need to spend a dime to get started. GarageBand is free with every Mac, Audacity is free for everyone, and Ferrite is free on iOS with a couple cheap in-app purchases for extra features.
But if you do want to invest a little bit in a better microphone, where should you put that cash? Here’s my recommendation for how you can get a great set-up for under $100.
At this point my recommendation for a podcast starter microphone is the $79 Audio-Technica ATR2100-USB.
The ATR2100-USB excels at keeping out room echo and other background noise (the stuff that can make podcasts hard to listen to), though that means you’ll need to work on your microphone technique and never stray too far away from the mic, or your voice will fade out rapidly. The good news is, it’s also got a headphone jack in its base, so you can hear your own voice as you speak and get immediate feedback if you stray too far away from dead center.
It’s a really amazing value at $79, and it’s often discounted on Amazon to between $35 and $50. The ATR2100-USB even has an XLR port on the bottom, so if you do end up wanting to plug it into a mixer or portable recorder, you can.
The problem with audio hardware is that you need to buy a bunch of accessories. The good news about the ATR2100-USB is that it already comes with XLR and USB cables, a microphone clip, and a desk stand. You don’t need to buy those.
What you do need to buy is a $3 foam windscreen. The ATR2100-USB requires you to get up close to it (because it’s blocking out room noise and echo!), but getting up close to a microphone can lead to lots of ugly popping sounds from your mouth. The windscreen will help filter those out.
You should also probably buy an $11 shock mount to replace the basic microphone clip that comes with the ATR2100-USB. If your microphone is sitting on a desk or table, you will probably be doing things like typing on your keyboard and bumping the work surface with your elbows. These are noises that you won’t notice, but they’ll reverberate right up through the mic stand and sound like explosions on your recording. A shock mount isolates the microphone so that it floats on a springy set of elastic bands.
Getting it off the table
The prices of all of these products can fluctuate quite a lot, but as I write this, those three purchases meet our goal of staying under $100! If you want to spend a little bit more money, well, there’s always a way to spend more money with audio equipment.
The next purchase I’d suggest is a boom arm or mic stand, to elevate your microphone off of your desk or table entirely. If you’ve got a desk you’re willing and able to semi-permanently mount an arm on, buy a boom arm like this one (I haven’t tested that one, fair warning). These arms clamp to your desk (so make sure you’ve got a place you can clamp one!) and generally you can screw on the shock mount you bought above rather than use the microphone clip that comes with the arm.
If you don’t have a permanent podcasting location—I didn’t for years after I began podcasting—consider a stand like this $20 model. I used this stand for quite a while when I was podcasting while sitting on my bed. When I was done, I could just fold the stand up and stash it under the bed.
No matter what your budget, podcasting can allow you to have your voice be heard. And if you do want to spend $100, you can have your voice sound that much better. The choice is up to you—but you don’t need to lay out a whole lot of money regardless.
[Thanks to Antony Johnston, author of the Podcast Guest Guide, for the topic suggestion.]
By Jason Snell
August 19, 2016 2:30 PM PT
Since then I’ve discovered a few new facts worth mentioning:
Auphonic’s got an iOS app, Auphonic Recorder. It’s iPhone only and designed mostly for audio recording, but it contains a share extension that allows me to export from Ferrite and immediately upload to Auphonic, without using something like Dropbox as an intermediary. If I’m using an Auphonic preset I’ve previously configured, it will even automatically begin processing my project using those settings once the upload is complete.
The $10 app TwistedWave Audio Editor will export in MP3 format, upload to Dropbox or a server via SFTP, and supports detailed MP3 tagging.
Depending on my needs, I could see myself using either of these tools. If I want to do audio post-processing and have a bit more fiddly control over every aspect of my tags, Auphonic will do the job. But for straightforward encoding and tagging, TwistedWave seems like the simpler choice.
By Jason Snell
August 10, 2016 3:15 PM PT
Last weekend my wife and I took a quick car trip to Ashland, Oregon to catch some plays at the Oregon Shakespeare Festival. The night before we left, I realized I hadn’t edited that weekend’s episode of The Incomparable yet, and I didn’t want to bring a laptop with me.
No problem—as I’ve written about before, I have used Ferrite Recording Studio to edit numerous podcasts over the past eight months or so. I can’t recommend it highly enough if you want to edit podcasts on iOS.
There’s just one thing: Ferrite won’t export projects in MP3 format.¹ Neither will many other iOS apps, and the reason is that MP3 encoding is still encumbered by patents. Any app that builds in MP3 encoding is risking a bill of thousands of dollars from some of the patent holders—and so most of them just don’t do it. I’ve searched for an iOS app that would encode my audio into properly tagged MP3s, ready for uploading to my server, but have come up empty.²
Instead, I turn to the web service Auphonic. Auphonic is free for two hours per month of processed audio, and charges for additional hours of encoding—I bought 10 hours of credits for $22, for example.
Getting my file from Ferrite to Auphonic is a little bit tricky. I export a file from Ferrite and instruct the app to save it to Dropbox. My iPad then uploads the file to Dropbox via the Dropbox app. Once that’s done, I use Dropbox’s Sharing feature to generate a link to the file, and tell Auphonic to use the contents of that URL as my audio source.
Within Auphonic, I can set show art (which I can upload directly from my Dropbox via Safari using iOS’s document-picker interface), tags, and even chapter markers with time codes, as well as the bit rate and file format of the final file. Auphonic also offers optional audio processing, creating a more level volume and reducing noise across the final track. Finally, you can add your own servers—SoundCloud, Libsyn, and any old server via SFTP—to your Auphonic account, and set Auphonic to automatically upload the result once it’s done processing the file.
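Everything set in that web form can also be driven programmatically: Auphonic exposes a REST API in which a production is described as JSON. Here's a hedged sketch of what such a production description looks like; the field names reflect my reading of Auphonic's API documentation and the values are made up, so verify against the current docs before relying on any of this:

```json
{
  "metadata": {
    "title": "Episode 42",
    "artist": "The Incomparable",
    "tags": ["podcast"]
  },
  "chapters": [
    {"start": "00:00:00", "title": "Intro"},
    {"start": "00:12:30", "title": "Main topic"}
  ],
  "output_files": [{"format": "mp3", "bitrate": "128"}],
  "algorithms": {"leveler": true, "denoise": true}
}
```

This is why presets are so handy: once a production shape like this is saved, every episode only needs a new audio file and title.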
I was able to export and upload The Incomparable while sitting at a comfortable table in an Ashland pub, drinking their beer and using their free Wi-Fi. Auphonic did the rest, re-encoding the file as an MP3, tagging it properly, and uploading the result to both my Libsyn account and to The Incomparable’s FTP server. When it was all done, I received an email alerting me that the entire process was completed. (It took a couple of minutes, start to finish.)
I wish there were a tool on my iPad that would do everything that the Auphonic web app does, but that may be impossible as long as the MP3 patent remains intact. Fortunately, as far as I can tell the final patents covering MP3 encoding will be expiring in 2017, at which point I’m sure Ferrite (and other tools) will add that feature. In the meantime, Auphonic is a solid and affordable alternative.
Yes, I could just post certain episodes of my podcasts in AAC format (and Ferrite will tag them), but I’d rather stay consistent, and it’s possible there are still some podcast clients out there that don’t like the AAC format. ↩
By Chip Sudderth
May 23, 2016 10:41 AM PT
Hobbyist and professional podcasters alike depend on Microsoft’s Skype for mustering panels and interviewing guests, even as they curse it under their breath for its occasional lack of stability and call quality. Skype is ubiquitous because it’s widely cross-platform, relatively easy to install and use, and free—but it may be time for Mac podcasters in particular to pursue more options.
Skype’s Mac user support forum has been abuzz since December with complaints that the ability to adjust conversation volume was removed in version 7.25. A Skype community manager acknowledged that client and server changes were responsible, and that restoring the functionality would not be easy: “For the speaker volume controls we are still working out how to address this for the scenarios where OSX global speaker volume controls are not the answer.”
This did not amuse podcasters on the forum, because Skype for Mac now consistently outputs “hot” and distorted audio to both headphones and capturing software. “Double-ending,” or recording both sides of a Skype conversation at the source for the producer to sync, is a podcasting best practice. But if a guest is unable to independently record their side of the conversation or has a technical failure, the producer depends on the Skype track for backup. Since Skype for Mac 7.35, that track is likely to sound jarringly worse than the host’s.
The changes in Skype may relate to a new problem I have in putting together my panel podcast, The Audio Guide to Babylon 5, using Skype and one of Rogue Amoeba’s indispensable tools for podcasters, Audio Hijack. Audio Hijack cleverly captures audio from any Mac application. Using the Skype preset, however, as soon as I press the record button the captured Skype audio becomes even hotter, leaving it largely unusable as a backup if my co-hosts have a recording failure.
Audio Hijack’s technical support team researched the issue and responded to me by email (emphasis added):
We’ve been digging further, and it seems that there’s a bug or major change in Skype that’s affecting Audio Hijack’s ability to capture and split up the input and output audio, and we’re looking into ways of improving that behavior. We might suggest using an alternative method of capturing your audio, by disabling the setting to include audio inputs with Skype, and capturing your microphone separately.
That’s what I did. My new Audio Hijack session (pictured) includes two separate audio inputs: a direct link to my USB microphone interface on the left channel and Skype audio output minus my input on the right channel. (The two inputs don’t even have to be combined into the same file; Jason’s preferred Audio Hijack layout sends each audio source into a separate mono file.) The result is that my Skype recordings are still hot but no longer too hot to use in an emergency.¹
The short-term lesson here is that podcasting tools that directly integrate with Skype may be somewhat risky, as Microsoft changes its clients and underlying technology without considering edge cases. On the Mac side, guests can simply record their side of the conversation using QuickTime Player. Producers can record the Skype track and their own microphones separately.
In the long term, however, this serves as a warning to podcasters. Is podcasting support on the Mac so much of an edge case that we need to more thoroughly explore alternatives to Skype? FaceTime is Mac-only. Google Hangouts, which runs as an extension to Chrome, can integrate with Hangouts on Air and YouTube for live video, but it can be a strain on both bandwidth and resources.
Cast seems to be the most promising alternative for traditional podcasting. Even without using its online editing and hosting services, it seamlessly records and syncs native audio from guests. It’s perfectly designed for novice users: just open an emailed link in Chrome, choose your microphone, and go. The host can directly retrieve the individual MP3 files for editing. In my experiments with Cast, however, it seemed unforgiving to guests with spotty internet service or overburdened computer hardware, and Cast doesn’t support more than four participants at one time.
More challenging to many podcasters is the cost: Cast charges a minimum of $10 per month for 10 hours of recording time. For all its headaches—and if you’re confident you’re not going to need to use its audio output—Skype is free. However, as we’ve seen repeatedly in the social media sphere, if you’re not a service’s paying customer your needs are more likely to be less of a priority when technological underpinnings or business models change.
My podcasting community tends to grumble a lot about Skype. Maybe we should take our attention, and even our money, elsewhere.
Plenty of podcasters use raw Skype audio to begin with. While the resulting audio quality isn’t ideal, a guest with a fast, reliable internet connection and a high-quality microphone should sound all right. ↩
By Jason Snell
May 5, 2016 8:14 AM PT
The bulk of the podcasting I do involves me sitting alone in a room talking into a microphone to other people who are somewhere else, doing the same thing. There are lots of advantages to this approach: It lets me host podcasts with people who live all over the world, for one thing, but it also isolates everyone’s sound. We’re all recording in our own little isolation booths, and that can make editing a whole lot easier, since I can clip out the coughing fit or barking dog from your recording and it won’t bleed through from anyone else’s microphone.
Unfortunately, when you’re recording live and in person, the isolation booth is gone, and things get much more complicated. The environment itself can be noisy and challenging, and using more than one microphone at a time adds complexity of its own. But on the bright side, you won’t need to spend much time editing, because there’s not much point: even if you clip a sound out of one microphone’s track, it’ll still be audible on the others.
Here’s the set-up I use for remote recording:
The recorder. I recently upgraded to the $400 Zoom H6, which allows me to record up to six XLR microphones at one time (with an additional adapter for the extra two microphones). My previous recorder, the $160 Zoom H4N, is only capable of recording two XLR microphones alongside its own built-in mic, which wasn’t enough for the larger groups I find myself recording live, so I sold it and upgraded. Still, the H4N is a great value as a starter recorder, and it can double as a USB microphone interface when you attach it to a computer. (And yes, if your subjects are willing to snuggle up a little bit, you can record many people with just two or three microphones.)
I choose to use a portable recorder rather than a computer and a USB interface mostly because it’s a much simpler set-up. With a laptop (or iOS device), you need to make sure it’s got power, you need to tote along a second box for the XLR-to-USB interface (which may need its own power source), and you have to count on your recording software not to let you down. Small portable recorders are self-contained, writing their output to an SD card for later import to a computer for editing. They can be powered by AC power or by AA batteries that you can find in any store in a pinch. It’s better this way.
(You may be asking yourself, can I attach two or more USB microphones to a Mac and record that way? I don’t recommend it. I’ve tried it in the past and the microphones generally seem to get out of sync, so when it comes time to put the tracks together later, it gets all echoey and weird.)
The microphones. I have a small collection of XLR microphones. Look at Marco Arment’s review of XLR microphones for details, but if you’re recording live you’re going to need to buy more than one, so price will be a factor.
The best trait of a microphone for live recordings is that it rejects sound that isn’t directly in front of the microphone. If you record with microphones that tend to pick up a lot of room noise, that noise will be magnified and you’ll get a noisy, echoey recording. I have two $150 Shure Beta 58As, but I also have two $20 Pyle PDMIC58s. If you buy the excellent value $60 ATR-2100-USB, you can take advantage of the fact that this USB microphone can also work as an XLR microphone and let it pull double duty.
The accessories. All the handheld microphones get covered with a $3 windscreen, and I screw their microphone clips onto a cheap fold-up mic stand. You’ll also need to buy XLR cables, and if your microphones are going to be spaced far away from each other and the recorder, you’ll need to make sure that they’re long enough to manage that.
If you need to use microphones in a space where there’s no table or desk, you could have everybody stand and hold the microphones as if they were ready to belt out some classic rock at the top of their lungs. Or you could invest in a few $25 boom stands. I bought one of these and it’s incredibly flexible—I’ve used it to record in all sorts of environments, and because it’s not attached to a table, it isn’t affected by people doing something noisy like pounding on that table.
The environment. This is a tough one. Record where you can record; if you can avoid super echoey spaces (empty walls, high ceilings, huge glass windows or doors), do so. Recording outside can be surprisingly quiet, unless you’re standing on a crowded sidewalk next to a major road. If you can find a quiet, non-echoey space, you’ve hit the jackpot. But I’ve done some good-sounding outdoor podcasts and some lousy-sounding indoor ones.
By Jason Snell
April 26, 2016 9:24 AM PT
If you’re podcasting or recording voiceovers for video, you need a good microphone. Fortunately, there are good options to be found even if you’re on a tight budget. Unfortunately, there are so many options that it can be dizzying. I reviewed five low-cost USB audio interfaces in a search to find the best of the many options.
The USB/XLR choice
For most podcasters on a budget, the right microphone is almost certainly a USB microphone. They’re easy to use and convenient—just plug it into your computer and start recording.
I’ve long recommended the Blue Microphones Yeti, after using one myself for several years, and it’s still a great balance of quality and price.
But as Marco Arment points out in his microphone mega-review, there are a lot of other good options. Right now the Audio-Technica ATR-2100-USB (sold in Europe as the Samson Q2U) seems to be the best buy; for a lot less money than the Yeti, you can get a USB microphone that doubles as an XLR microphone for more complex set-ups, with a built-in headphone jack. If you’re usually recording in an echoey room, this noise-killing dynamic microphone is a great choice.
However, there are reasons to choose XLR microphones over USB models. XLR microphones, differentiated by the large three-pinned XLR connector that’s been in use for ages and has plugged into many an analog sound board, come in many shapes and sizes, including some remarkably good-sounding microphones that are available for astonishingly low prices.
Unfortunately, XLR microphones won’t work with a computer or other audio recorder unless you can connect them to an interface that, in turn, connects to your computer via USB. If you’re planning on recording more than one microphone at a time, XLR interfaces are also handy, because you can connect many microphones to an interface box and then record it all on your computer.
They’re also flexible; I can connect my XLR microphones to anyone’s interface box or mixer, and on more than one occasion I’ve been a microphone short and been able to borrow one from a friend. I also own a Zoom H6 recorder that allows me to connect up to six microphones via XLR cables in a portable setting.
There are a lot of uses, but also a lot of parts: if you take the XLR plunge, you’ll need not only the microphone, but also the interface and (of course) XLR cables to connect them all.
By Jason Snell
March 25, 2016 10:00 AM PT
On Monday afternoon I recorded this week’s episode of Upgrade live from Interstate 280, driving home from the Apple event in Cupertino. It was an experiment—I thought it might be fun to do something different for our post-event podcast, and on a day absolutely packed with work, it also allowed me to do something productive with the long drive between Apple and my home north of the Golden Gate Bridge.
I’m pretty happy with the final result, though I wouldn’t recommend recording every episode of your podcast in a moving car. I’m impressed that we only seem to have received one complaint about the danger of podcasting while driving—if you’re opposed to all in-car phone calls, then we’ll just have to disagree—and happy to have heard from numerous people who were entertained by the sound of my turn signals, the beep of the Automatic connected to my car, and the sound of the sudden downpour that happened in the vicinity of San Francisco International Airport.
A few people were wondering what equipment we used to make the podcast, so here’s the scoop:
My microphone was a Sony ECM-77B, which is a small clip-on design that I usually use for recording videos. With it clipped to my shirt, I was able to record without taking my hands off the wheel of my 2005 Honda Civic Hybrid. I attached it to my Zoom H6 portable recorder, which I bought last year. It’s capable of recording six microphones at once, but in this case I was only recording the one.
Myke Hurley and I tried to chat via Skype, but that connection wasn’t stable, so we switched to the telephone. Myke loaded some credits into his Skype account and called my iPhone from Skype, and I kept one earbud in my ear (you can’t cover both ears while driving in California) and talked to Myke during my drive. Listeners to the live stream heard me sound like I was on the telephone, because I was.
Once the drive was over, I ran the file through the automatic dialogue denoiser plug-in in iZotope RX 5, and then sent it off to Myke so he could use it to replace the audio he had recorded of me talking via the telephone. He imported the file into Logic and manually ducked the audio when I wasn’t talking, so the sound would seem consistent—if we cut it off entirely when I wasn’t talking, the change in sound was really distracting. This was a lot of extra work on Myke’s part, but I think it made the end product sound that much better.
I left Apple in the early afternoon, and there was almost no traffic on my return home, so the podcast literally covers every moment I was driving from Cupertino to my house. We wrapped up the podcast with me sitting in my chair at home as I usually do! I leave the calculation of my average driving speed across the trip as an exercise to the listeners.
By Jason Snell
March 24, 2016 2:00 PM PT
Some additional items that came up after I posted my story about Apple’s new Lightning adapter:
As is detailed on the product’s spec sheet, it works with lots and lots of iPads. It also worked fine with my iPhone 6S, even though it’s not listed on Apple’s chart. The product’s marketing seems focused on the iPad Pro—and USB 3 transfer speeds can only be achieved on the 12.9-inch iPad Pro model—but it’s got broader off-label utility.
Though my story was focused on the iPad as a podcasting platform, after I recorded with my iPhone 6S I realized how amazing it would be to record a full-quality podcast with nothing but a small, high-quality microphone and an iPhone. Talk about portability! Unfortunately, as I mentioned in the story, Apple needs to open up microphone access to multiple apps on iOS before this can work. (My preferred iOS audio editing app, Ferrite, works just fine on iPhone. It’s just cramped.)
Fraser Speirs asked on Twitter if the Camera app on the iPhone or iPad would automatically pick up the audio from an attached USB microphone. This morning, I attached the adapter and the Yeti to my iPhone 6S and took some video in the Camera app, and the sound that was captured came from the Yeti itself. So the answer is yes!
Phil Schiller said you can connect an iPad to Ethernet via a USB-to-Ethernet adapter, so I tried it. Even in Airplane Mode or with Wi-Fi turned off, I was able to connect to the Internet via my Ethernet adapter, so it works! However, I can’t figure out where you can see any evidence that you’re on Ethernet, or any way to adjust networking settings. But it does seem to work.
By Jason Snell
March 23, 2016 4:09 PM PT
While introducing the new 9.7-inch iPad Pro at Monday’s press event, Apple marketing chief Phil Schiller made an aside about a new accessory, the $39 Lightning to USB 3 Camera Adapter:
This is a really powerful accessory, a USB [adapter]. Sure, it lets you plug in your camera, which many of us do, but because it’s powered, you can use a lot of powered USB devices. For example, you can plug in an Ethernet adapter to get on your corporate network. And for those of you who are podcasters, you can plug in a microphone and do your podcast right from an iPad Pro.
I’m a podcaster and an iPad Pro user, so I considered letting out a cheer in the small Town Hall theater, but didn’t want to be the only one. I looked down at iMore’s Rene Ritchie, two rows in front of me, just as he started to clap, and then I joined in. (We did it, everyone, we got podcasting on an iPad to elicit a cheer at an Apple event!)
It’s two days later and I’ve taken delivery of one of these adapters, and have given it a try. The short version is, yes indeed, it works as Apple indicated. But there are also a few quirks to be aware of—and this doesn’t remove all the roadblocks to using an iPad Pro as a dedicated podcasting machine.
Powering microphones and mixers
Though there’s been a USB-Lightning adapter for some time now, the issue with using a USB microphone for podcasting has been all about power. As Schiller indicated, most common USB microphones require more power than the iPad can deliver—and so they just won’t work if you plug them into the adapter. One workaround people discovered for this was to attach a powered USB hub to the adapter, and then plug a microphone into the hub… but it was a messy solution.
The new adapter solves this problem by getting wider, adding a Lightning port right next to the existing USB port. This means that you can use a USB device while powering your iPad, which wasn’t possible with the old model. (I sometimes stream live podcast audio via an external USB device, but had to be sure that my battery was fully charged before I did that. Similarly, if you want to hook your iPad to your corporate Ethernet network, as Schiller suggests, you’d probably also want to keep your battery topped up while you worked.)
The power that comes to the adapter via Lightning doesn’t just power the iPad—it’s also feeding the USB device you attach to the adapter. When I first tried to attach audio devices to my iPad Pro, I learned an important lesson: If you want to get power out of the adapter, you’ve got to put power into it. When I attached my USB-to-Lightning cable to Apple’s 5-watt USB power adapter—the tiny cube Apple includes with iPhones—I had no success. When I switched to the larger 12-watt brick, though, everything started to work.
I was able to attach both my Blue Yeti microphone and an XLR-based microphone via the Sound Devices USBPre 2 USB mixer to my iPad Pro with no problem. Both showed up as inputs in Ferrite Recording Studio immediately. This all worked on my iPhone 6S, too—same adapter, same microphones, same result.
One funny thing I noticed accidentally: when I removed the USB end of my Lightning-USB cable from the power adapter and plugged it into my iMac, the Mac didn’t register the iPad as being present. The adapter seems to use its Lightning port only as a source of power.
But we’re not there yet
So once the applause from Phil Schiller mentioning iPads and podcasting on stage dies down, where does this leave us? If you’re someone who wants to record a podcast in person using an iOS device and a USB mixer or microphone, you’re set. But most of the podcasts I do are conversations that are conducted over the Internet, usually using Skype. And for the iPad to be a viable device for those kinds of podcasts, Apple needs to update its software.
In short, the audio inputs on iOS need to be accessible by more than one app at a time. Right now I can make a Skype call on my iPad, or I can record my voice to a file on my iPad, but I can’t do both at once—the moment a second app wants access to the microphone, the first one has to give it up. Changing that one behavior in iOS 10 would be enough to allow me to travel and record podcasts without bringing my MacBook Air with me. (I can already edit podcasts on iOS quite well—I edited this week’s Incomparable on my iPad Pro, in fact.)
There’s more Apple could do here, like offering apps access to system audio or the audio output of individual apps, so I could record the sound coming out of Skype, as I do with Call Recorder or Audio Hijack on my Mac today. That seems less likely to me, but I can still dream. (Skype could also adopt Apple’s existing Inter-App Audio, allowing other apps to record its output, but this seems even less likely to me.)
(An aside: Yes, you can record remote podcasts entirely on iOS today if you use two devices, such as an iPhone and an iPad. One of them serves as your Skype device while the other one acts as a recorder. It’s really not an ideal situation, especially if you want to hear both your own microphone input and the voices of the people you’re podcasting with.)
It would also be helpful if Apple improved importing files from USB devices and SD cards. Right now iOS is a whiz at importing photo and video files from attached USB devices and cards, but it fails at other file types. I travel with an audio recorder that saves files to an SD card (and also can attach via USB)—but once I record audio there, there’s no way to transfer it to my iPad. It would be great if external media was accessible via standard iOS open and import sheets. Right now, if I want to travel and record something on my fancy six-track USB recorder, I am unable to work with those files on my iPad without the intervention of a Mac.
So there’s more work to do on this front, but this new adapter removes another barrier. Podcasters like me are now one step closer to the dream of doing it all on iOS. I hope Apple eliminates the final roadblock with iOS 10 this fall. Until then, my MacBook Air will be mandatory equipment whenever I’m traveling and podcasting simultaneously.
By Jason Snell
January 11, 2016 5:03 PM PT
On Monday, Rogue Amoeba released Loopback, a $99 (currently on sale for $75) audio utility that dramatically enhances the flexibility of Mac audio. If you’re a podcaster, DJ, or other person who spends time trying to route audio between different Mac apps, you may find Loopback to be an essential tool.
OS X frustratingly doesn’t let you route audio directly from specific apps and input devices to other apps. With Loopback, you can create virtual audio inputs and outputs that appear in the Sound preference pane and in just about any app that works with audio. (It’s a trick that I previously used Ambrosia Software’s WireTap Anywhere tool for, but that app broke in Lion and is no longer being developed. The open-source tool Soundflower does the same thing, although I find its interface confusing and its compatibility and reliability wanting.)
Loopback uses the audio smarts of the makers of Audio Hijack to create an audio utility that’s reliable and offers an interface that’s much more easily understandable. I’ve been using Loopback during its lengthy beta period, and have found it to be an invaluable tool for some very specific audio needs.
Here’s a simple example of how Loopback can be helpful: Even if you’ve got a multi-channel input device attached to your Mac, Skype will only ever use the first channel. With Loopback, you can create a virtual input device that mixes all the channels of your mixer into a single channel. (When I was at Macworld, we had a ridiculous setup where Skype used an iMac’s audio-input jack as its microphone, fed by an output from our mixing device, so that the people on the other end of Skype could hear all four microphones in our studio at one time. Ridiculous.)
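Loopback does this routing natively inside the Mac’s audio system, so you never see the math, but the mixdown itself is simple. As a conceptual sketch only (the function name and data layout here are illustrative, not anything from Loopback’s actual internals), combining every channel into a single mono channel can be done by averaging each sample frame:

```python
def downmix_to_mono(frames):
    """Collapse multi-channel audio frames into a single channel.

    frames: a list of tuples, one tuple per sample frame, with one
    float sample (in the range -1.0 to 1.0) per channel. Averaging
    the channels, rather than summing them, keeps the result inside
    the valid range so the mix can't clip.
    """
    return [sum(frame) / len(frame) for frame in frames]

# Two stereo frames: left channel loud, right channel quieter.
stereo = [(0.8, 0.2), (-0.4, 0.0)]
mono = downmix_to_mono(stereo)
# Each output sample is the average of that frame's channels.
```

A real virtual input device would do this per buffer inside an audio callback, but the principle is the same: Skype sees one channel that carries a blend of everything.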
Alternately, if you’ve got two USB microphones, you can plug them both in, and create a virtual input that combines them both. Switch to Skype, choose the new virtual interface as your “microphone”, and the app will be none the wiser. (Rogue Amoeba also suggests that screencasters will like Loopback because you can combine apps and input devices exactly as you want them when you’re recording.)
You define what goes where via Loopback’s simple, drag-and-drop interface. You can also create a “pass-thru device,” which serves as both an input and an output, so that you route sound directly from one app to another—for example, from GarageBand to Skype.
One of the frustrations I’ve had for a while is an inability to play audio clips into a Skype conversation. I actually figured out a way to work around this using Audio Hijack 3, but the approach is only functional when Audio Hijack 3 is actively recording my session. With Loopback, I can create a virtual device that combines my microphone and either iTunes or a soundboard app, and use that device as Skype’s “microphone.”
Is this an esoteric audio tool that will only be of value to people who do weird things with Mac audio? Yep. But if you’re one of those people, Loopback is potentially a workflow-shattering experience—and in the best way.
By Jason Snell
December 4, 2015 11:01 AM PT
Recording a podcast with other people over the Internet can be complicated. Everyone needs microphones, sure, but they also need to connect to you so you can hear one another, and for the best audio quality, they need to record their end of the conversation and then send that file to you.
The new web service Cast makes the recording process easy by not requiring that panelists install any special software (beyond Google Chrome—it doesn’t work with Safari yet) or sign up for anything in order to be a part of the conversation. You just send them a link, they open it in Chrome, and they’re up and running. (The service also provides basic in-browser audio editing and podcast hosting, all with the aim of making it easier than ever to get your podcast heard.)
I tried Cast a few times this summer as a part of the service’s beta test, and wasn’t thrilled with the results, but now that the service is officially ready for the world, I gave it a spin this week. Dan Moren and I recorded a short podcast available to Six Colors subscribers using Cast.
I was pretty happy with the sound quality of the conversation, both as we were talking and when we played it back. There weren’t any noticeable artifacts, and the final version on the server sounded good. Cast works by streaming live audio while simultaneously recording your microphone locally and uploading a higher-quality version in the background.
Cast is limited to three guests (plus the host), but large panels are unruly and difficult to edit (take it from me), so I’m not sure it’s a major limitation.
Cast’s recording interface also takes care to add some features that will be quite useful to hosts and panelists alike. A Show Notes button lets hosts write down information about the recording, including when there were issues that will require attention when it’s time to edit the podcast. And the Raise Your Hand button allows a panelist to indicate that they’ve got something to say, which can help smooth out the conversation—I know a lot of podcasters who type the word “hand” into their Skype windows to get the same effect.
Once the recording is done, you can jump into Cast’s editing interface, or—and I like this feature a lot—just walk away with everyone’s files, recorded locally and uploaded invisibly behind the scenes, and pop them into your audio editor of choice. Since the host controls the start and stop of the recording session, the files all start at the same point, which saves you from having to manually synchronize them. Files come down as 128kbps MP3s, which is absolutely acceptable quality for a spoken-audio podcast. (The first time I tried this with the files from my session with Dan, the download failed. I went back later and tried again, and there was no problem.) The show notes are also downloadable as a text file, tagged to the time code of your recording.
Editing in Cast is pretty basic, as you might expect from a browser-based editor. You can edit out chunks of the entire recording, which is useful to make the beginning and end of the show line up perfectly, as well as remove any digressions or mistakes in the middle. You can also adjust the volumes of various tracks, so you can balance out the relative volumes of all your guests. Unfortunately, you can’t trim out noise from a single track, so if someone has a coughing fit while someone else is talking, Cast can’t help you.
You can add new audio layers to the Cast editor, letting you overlay audio (say, sound effects or music) on your session. There’s also a clever “Wedges” feature, which lets you insert audio that pauses your session, plays the audio file, and then continues your session—useful for introductions, ads, and that sort of thing.
Once you’re done, click Mix and Cast will collapse all your audio files into a single mixed-together file. You can choose Standard mix, which leaves your audio alone, or a dynamic-compression mix, which is supposed to smooth out your audio levels. Unfortunately, I found the dynamic-compression mix to be too aggressive—the whole thing sounded overmodulated.
Cast is $10/month (for up to 10 hours of recording time) or $30/month (for 100 hours of recording). I didn’t test Cast’s podcast-hosting feature, but offering unlimited hosting certainly sweetens the deal if you’re currently paying for hosting with a service like Libsyn or Podbean. I published an excerpt of my podcast with Dan to Cast if you’d like to give it a listen and, in the process, test out Cast’s hosting infrastructure.
If you’re a podcast host who has a lot of different guests, non-technical panelists, or panelists who don’t remember to press the recording button or send you their file in time, Cast offers an appealing and simple way to get good quality audio out of guests without asking them to install Skype. If you’re a podcaster or potential podcaster who is frustrated or confused by the Skype-and-local-recording rigamarole, Cast also seems like a service worth trying. And if you don’t want to do more than basic editing, Cast can potentially be a one-stop shop for all your recording, editing, and hosting, which is quite compelling.
Check Cast out for yourself at tryca.st.
By Jason Snell
November 24, 2015 4:46 PM PT
When I wrote about editing a podcast on iOS using the Ferrite Recording Studio app, and then discussed it on The Talk Show, I heard from a bunch of people who wanted to know what I used to record audio on the iPad.
That’s an easy answer—I didn’t—with a more complex issue wrapped inside it. This is a tough one. Even Federico Viticci of MacStories, who uses iOS to do his entire job, still uses a Mac for recording podcasts.
Audio on iOS is primitive when compared to OS X. Only one app can play audio at a time—if you’re playing music and you open YouTube and start playing a video, your music doesn’t keep playing (as would happen on the Mac)—the music is stopped and then YouTube begins to play. And while the Mac’s innate audio-input abilities are not great (thank goodness for utilities like Audio Hijack and Sound Siphon and Call Recorder for Skype), they’re a darn sight better than what’s available on iOS.
As with playing audio, only one app can record audio on iOS at one time. And yet most of the podcasts I create on iOS require that I use a communications app—usually Skype—to talk to the other people on the podcast. The moment Skype begins a call on iOS, it grabs control of the microphone and any other recording app is stopped in its tracks.
There may be some workarounds possible—GarageBand and other apps have been written to use an app called Audiobus to send audio back and forth across apps. It’s a clever hack, but I’m unclear if it could work with Skype (given that it’s sending and receiving call audio all the time, which is more complex than either playing or recording alone), and even so, it would require Skype to be updated to support the feature. (Skype could, of course, offer a feature that let you record your own microphone locally, or offer a recording of your call in the cloud, but Microsoft seems uninterested in pursuing such features.)
So the best hope here is that iOS gets an update at some point that allows multiple apps to have access to audio input. Every year I hope it’s one of those little features that Apple displays on a slide at WWDC that says, “100+ other great features!” or somesuch. It’s never been there.
In the meantime, there is a way to make a Skype call and also record on a high-quality microphone using only iOS. It’s just kind of ridiculous: You make the Skype call on your iPhone, presumably with iPhone earbuds or other compatible headphones with a microphone, while sitting in front of an iPad that’s attached to a microphone and recording locally. The people on Skype hear your bad microphone, but your good microphone is what gets used on the actual podcast. Serenity Caldwell used this method for both this week’s Incomparable Radio Theater and Upgrade episodes. The risk is that if your recording fails, all that remains is a lousy recording of your voice on a set of earbuds via Skype—not a great backup.
I’ve got a Zoom H6 recorder, so if I wanted to travel with just iOS devices, I think I would just record my microphone locally using that, then transfer the file for editing. That also allows me to bypass another problem with recording on an iPad or iPhone: support for external microphones.
There are a few microphones and mixers out there with a native Lightning connector, but most USB devices rely on Apple’s Lightning to USB Camera Adapter. Unfortunately, the Lightning connector is limited in the amount of power it can supply; most USB devices won’t work with it unless you connect them via a powered USB hub. Things get messy quickly. It’s workable—I discovered that even my Sound Devices USBPre 2 audio interface can work with the iPad if you bring a powered USB hub and put it in a special compatibility mode—but it’s not ideal.
That’s the longer answer. The short answer is, recording podcasts on iOS today is not as easy as editing them. It can be done, but only with a number of workarounds that aren’t necessary on the Mac, which has a more mature sound system that can handle playing and recording multiple audio streams in multiple apps simultaneously.
Ah, well. Maybe in iOS 10.
By Jason Snell
November 13, 2015 5:04 PM PT
Like a lot of iPad users, I dream of traveling with just the iPad, and no laptop. I’m not sure what it saves me, really—my 11-inch MacBook Air is about as small as they come. But still, it’s a dream.
What gets in the way of it, for me: podcasting. iOS has come a long way in terms of power and functionality, but when it comes to audio there have always been lots of issues. iOS basically doesn’t allow two apps to use the microphone simultaneously, and Skype for iOS doesn’t support built-in recording or a pass-through technology like Audiobus, so if you want to talk on Skype while also recording your microphone’s input, you either need to use two devices or a Mac.1
Using an iPad to do the kind of multi-track podcasting editing I do in Logic on my Mac has been possible for quite a while. Auria is the app I’ve liked the most for this sort of thing, but its interface always struck me as ungainly. I could edit a podcast in that app, but it was slow, and not very much fun.
This week writer/podcaster Fraser Speirs mentioned a new podcast editor he liked, Wooji Juice’s Ferrite Recording Studio. I had been looking for a project to take on in order to test out the iPad Pro, so I took Ferrite for a spin.
In a word, wow: This is the iOS multitrack editor that I’ve been waiting for. Ferrite has all the features that have made my podcast editing workflow so efficient: Strip Silence, compression, noise gate, ripple delete, quick selection of all following clips. It’s all there. And it’s all built inside an attractive interface that’s a pleasure to use. It’s like Ferrite read my mind.
Only later did I realize that Ferrite did, in a way, read my mind. Canis, the lead developer of Ferrite, has listened to my podcasts and read my articles about podcast editing, and apparently some of that rubbed off on the product? During development, he asked me to send him some of my sample podcast files so that he could test using real-world examples, and I sent him a zipped folder full of the raw files that I use to edit The Incomparable. I just hadn’t connected the dots.
Like Logic, Ferrite will break long podcast tracks into short blocks by removing the silence between noisy passages; just select a track and choose the Strip Silence command from a pop-over menu, then specify a couple of settings. It’s got a built-in compressor and noise gate (unlocked via an in-app purchase) to level out volume. Trimming individual blocks of sound is as easy as tapping and sliding a finger left or right. And when I want to pull everything in the project forward or backward in time, I just tap on a clip, then triple-tap to select all of the following clips.
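Strip Silence in Ferrite (and Logic) works on real audio with settings for threshold and minimum silence length. A stripped-down sketch of the idea, assuming a plain list of samples rather than real audio buffers (the function and its parameters are my own illustration, not Ferrite's implementation), looks something like this:

```python
def strip_silence(samples, threshold=0.05, min_gap=4):
    """Split a flat list of samples into blocks of sound.

    A sample counts as "silent" when its absolute value is below
    threshold. Silent runs shorter than min_gap stay inside a block
    (so brief pauses between words don't split a sentence); runs of
    min_gap or more samples end the block. Returns a list of
    (start_index, block_samples) pairs.
    """
    blocks = []
    block = []      # samples in the block being built
    start = 0       # index where the current block began
    quiet_run = []  # trailing silent samples not yet committed

    for i, s in enumerate(samples):
        if abs(s) < threshold:
            quiet_run.append(s)
            if len(quiet_run) >= min_gap and block:
                # Long silence: close the block, trimming the quiet tail.
                blocks.append((start, block))
                block = []
            continue
        if not block:
            start = i
            quiet_run = []           # drop silence before a new block
        elif quiet_run:
            block.extend(quiet_run)  # a short pause stays in the block
            quiet_run = []
        block.append(s)

    if block:
        blocks.append((start, block))
    return blocks
```

Real editors measure loudness over windows of audio rather than single samples, but the shape of the algorithm is the same: threshold, minimum gap, and a list of resulting clips with their original positions.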
Ferrite works much better for me with a keyboard than without, mostly because I spend an awful lot of time pressing the space bar to toggle playback on and off. There’s a play/pause icon on the interface, of course, but it’s way down in the bottom right corner, which is not a convenient location, especially on the enormous iPad Pro screen. I also needed to use the keyboard to rapidly delete clips that were full of stray noise, because Ferrite’s touch-based multiple-clip selection feature is a little bit finicky.
Still, the fact is that my temporary can-I-do-this experiment with Ferrite on the iPad Pro never reached the stage where I bailed out and decided that I couldn’t do it. A couple of hours later (par for the course for these things), I had an entire finished episode of The Incomparable ready to go2. (I did have to export the final file back to my Mac to re-encode it as an MP3; Ferrite currently only lets you output projects as AAC files.)
Will I edit next week’s episode on an iPad? Probably not, but that’s more a function of the tools that surround my editing experience (MP3 taggers and encoders, track-sync utilities, and the like) than the core editing experience itself. But for the first time I can see myself traveling with just an iPad and using it to edit podcasts wherever I go. (But if I need to record a podcast on the road, I’ll need to record on my iPad while I’m talking on Skype using my iPhone…)
One final note: I did this all on an iPad Pro, but Ferrite works on other iPad models, and even iPhones. So even if I don’t end up sticking with the iPad Pro, I suspect that I’d have no problem editing a podcast on my iPad Air 2.
Ferrite is free to download from the App Store, with its more advanced features accessible via two $10 in-app purchases. If you’re a podcast editor who dreams of using an iPad to do the job, I highly recommend you give it a try.
By Jason Snell
October 9, 2015 1:51 PM PT
Today Marco Arment released Overcast 2, a free update to his iOS podcast app. There are a lot of great iOS podcast apps out there, but Overcast remains my favorite, thanks to its excellent Smart Speed and Voice Boost features.
Speaking of those features, in previous versions of Overcast they were unlocked when you made an in-app purchase. Beginning with Overcast 2, they’re free. The entire app is free, in fact, with Marco going to a patronage model—he requests donations if you use and like Overcast, to help support its continued development.
It’s an interesting move, but Marco was right to be concerned that the 80 percent of his users who didn’t pay weren’t seeing his app’s most notable features. Now everyone can use those features—and if a small percentage of Overcast users figure that it’s worth paying to thank Marco for his work, it should all work out.
That’s the End of That Chapter
An inside joke in the tech podcasting community has been that, for quite some time now, there have been some vocal podcast listeners who will strongly and repeatedly suggest that real podcasts embed chapter marks. It’s not fair to say that these people are almost always German—sometimes they’re Austrian or Swiss.
For a long time I made AAC versions of my podcasts specifically to create chapter marks using GarageBand. But years ago, I gave up and went to MP3 versions only. However, it turns out that the MP3 format supports chapter marks too—they’ve just never been supported by most podcast-creation tools or podcast-playing clients1.
Today, with the release of Overcast 2, the number of people who can take advantage of podcast chapter marks has skyrocketed. If you’re a podcaster wondering how you can add chapter marks to your podcast, your options are limited right now.
In fact, right now I know of only one, and it’s what I’ve been using for Clockwise for the last couple of years: the web app Auphonic. Auphonic is an audio processing tool—you upload your file, and the service encodes it, adds chapter marks, applies leveling and filtering, and can even upload the result automatically to your host. You can process two hours of content per month for free, and there’s a sliding scale of what you pay for more processing time.
Auphonic also sells a Mac app called Auphonic Leveler Batch Processor, which does all the leveling and filtering, but unfortunately doesn’t (yet?) support adding MP3 chapter marks.
So for now, if you’re a podcaster and you want to experiment with chapter marks, I’d recommend that you check out Auphonic. But it’s hard to believe that someone won’t build a tool—even a quick and dirty one—to make this something you can do right on your Mac2.
By Jason Snell
August 12, 2015 2:54 PM PT
I used to edit podcasts in GarageBand, but switched a few years ago to Apple’s $200 Logic Pro. I don’t use most of Logic’s high-end audio production features, but it’s got a few features that make it much better than GarageBand for my purposes.
However, GarageBand is perfectly suitable for podcast editing, and don’t let anyone tell you different. Every Mac comes with GarageBand, meaning every Mac user has access to a free multitrack audio editor capable of generating high-quality podcasts. And while it’s true that the latest version of GarageBand (version 10) lacks some of the podcast-specific features of GarageBand 6.0.5 and earlier, it’s not true that you can’t edit a podcast in the current version of GarageBand. You can! (Earlier on Six Colors I wrote about editing podcasts in more depth.)
GarageBand 10, in fact, is based on the same core set of features as Logic, which means you can take advantage of some plug-ins to make your podcasts sound much better—if you can figure out how to use those features. GarageBand doesn’t make it easy. Let me give you a tour of where these features are and offer some suggestions about how you can use them to make a better podcast in GarageBand 10.
By Jason Snell
July 17, 2015 2:18 PM PT
This month Apple’s celebrating “10 Years of Podcasts”, meaning that it’s been a decade since Apple introduced podcasting features into GarageBand and iTunes and added a podcast directory to the iTunes store.
Of course, podcasts have been around for more than 10 years. I remember Shawn King broadcasting radio on the Internet in 1994, and several other Apple-themed podcasts date from the early 2000s.1 Leo Laporte founded TWiT in 2005, though in a fit of pique about Apple making noises about owning the word podcast, he re-dubbed them netcasts and you still hear that word on TWiT’s promos today.
Prompted by Rene Ritchie, I looked up the first podcast I actually hosted. It was probably Macworld Podcast 27, February 8, 2006, live from a cruise ship in the Pacific Ocean—though I more vividly remember the very next episode, which featured Leo Laporte and was largely conducted in a shipboard bar. As Leo and I talked, more geek cruisers stopped to watch. By the end of the chat, Leo and I had gathered a studio audience, which applauded when we concluded. It was awesome.
The first podcast of my own was the original TeeVee podcast, in July of 2006. It was sporadic and didn’t last very long. I didn’t resume podcasting independently from my job until August of 2010, when The Incomparable debuted. Hard to believe it’s been nearly five years, until I look at the calendar and see that I’ve got to prep episode 256 for posting tomorrow.
These days I host or co-host four weekly podcasts and produce several more.2 Thanks to the rise of podcast sponsorships (and my departure from my old job), I can say that I’m not just a writer and editor who podcasts on the side—I’m also a professional podcaster.
That’s weird, but it’s good. I love to listen to podcasts and I love to make them. It’s good to be doing something you love. If podcasting couldn’t help me make a living, well, I’d still be doing it. (Just probably not quite as much of it!)
Upgrade (Mondays), Clockwise (Wednesdays), TV Talk Machine (Fridays), and The Incomparable (Saturdays) are my four weekly podcasts. I also do Total Party Kill fortnightly, TeeVee weekly during “Doctor Who” and “Game of Thrones” seasons, Robot or Not irregularly, and parts of the Incomparable Game Show. ↩
By Jason Snell
June 24, 2015 12:21 PM PT
One of the reasons I promote Call Recorder as a tool for Mac podcasters is that it records what you hear on Skype. Whatever microphone is selected as an input in Skype, that’s the one Call Recorder records. So if I can hear you, and you sound good, and you’re using Call Recorder, you’re going to give me a recording of your microphone that sounds good.
When people don’t use Call Recorder, I often discover that while they sounded great on Skype—their fancy high-quality external microphone was selected as the input in Skype’s Audio/Video settings—they were accidentally recording their conversation using their computer’s built-in microphone.
It’s very sad. It means I have to choose between a local recording of a bad microphone and a Skype recording of a good microphone. The Skype recording is generally of pretty good quality, though I prefer a local recording because it doesn’t ever get weird Skype sound artifacts (common when someone has a dodgy Internet connection) and it’s an isolated version of one person’s voice. A recording of a Skype conversation contains everyone in the conversation, and when they all talk at once there’s nothing you can do to pick them apart.
Anyway, this scenario happened this week. One of my guests accidentally recorded using their computer microphone rather than the good microphone we heard on Skype. So I was going to have to use the Skype recording, but I had local recordings of the other guests.
This is doable, and in fact what I have to do when someone’s local recording utterly fails. (The most recent episodes of Total Party Kill feature a recording failure, so when one person talks I have to delete everyone else’s voices and use the everyone-on-Skype track instead.)
But in this case, I did have a track from the person. It did record a voice, just not one at a quality I could use. To save the day (and my time), I cheated. Here’s what I did.
First, I had to trim the local recording so that it synced perfectly with my Skype reference track. Then I dropped both tracks into Logic and synced all the other local audio files with them, using the Skype track as a reference.
I use Logic’s Strip Silence feature to make noisy areas in a track visible, and remove all areas of a track that contain silence. Once I run the Strip Silence command, only areas containing noise remain on any given track.
In this case, I could use Strip Silence to my advantage. I ran Strip Silence on the local recording of the computer microphone, meaning that Logic was only using that track at times when that panelist was speaking. It was, essentially, a map of when that person talked and when they were silent.
What if I could use that set of Strip Silence-created audio blocks as a sort of audio mask (forgive me, that’s my Photoshop creeping in)? After all, when the panelist is talking, it’s going to be (mostly) just them talking in the Skype track, too.
So that’s what I did. I quit Logic, opened both the local recording and the Skype reference track in Sound Studio, copied the Skype reference track, and pasted it right over the local computer-microphone recording, replacing it entirely. Then I saved the file and quit Sound Studio.
When I opened Logic back up, it did yell at me—it looks like this file has changed!—but then continued on its way. In the place of the old local audio was now the audio from the Skype reference track, but only the moments when my panelist was talking.
At that point, I still had some work to do—stripping out coughs and microphone clicks that weren’t actual talking, removing other audio tracks when there truly was cross-talk, and the like—but it was clean-up work. And much less work than having to manually cut in the Skype track (and cut out all the other tracks) every time the panelist with the bad recording spoke.
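The masking idea can be sketched in code. This is a toy illustration of the concept, not Logic’s actual Strip Silence implementation: find the windows of the bad local track that rise above a silence threshold, then keep only those stretches of the Skype reference track. The threshold and window size here are arbitrary stand-ins.

```python
import numpy as np

def silence_mask(samples, threshold=0.02, window=1024):
    """Mark windows whose peak level exceeds the threshold,
    i.e. the stretches where the speaker is actually talking."""
    mask = np.zeros(len(samples), dtype=bool)
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        if np.max(np.abs(chunk)) > threshold:
            mask[start:start + window] = True
    return mask

# Toy data: "speech" in the middle of the bad local track, silence elsewhere.
local = np.zeros(8192)
local[2048:4096] = 0.5                       # the panelist talking (badly recorded)
skype = np.random.uniform(-0.3, 0.3, 8192)   # the full-mix Skype reference track

talking = silence_mask(local)
isolated = np.where(talking, skype, 0.0)     # Skype audio kept only while they talk
```

In Logic the detection is configurable (threshold, attack, release); the point is just that the noisy regions of one track can act as a mask laid over another.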
By Jason Snell
June 19, 2015 10:53 AM PT
Here’s some podcast/audio nerdery that won’t be of interest to most people, but it’s saved my bacon more than once and just this morning it appears to have saved the bacon of a fellow podcaster, so here goes.
I broadcast my podcasts live using Nicecast, a $59 utility from Rogue Amoeba. One of Nicecast’s, er, nice features is that it’ll also optionally save an archive of your broadcast locally. I’ve enabled this feature, mostly just in case the recording software I usually use—Call Recorder—fails.
There are a lot of reasons I use Call Recorder, most specifically that it records whatever microphone is selected as an input in Skype, so if you sound okay to your fellow podcast participants, your recording will sound okay too. You won’t believe how many times I’ve had it happen that someone has sounded great on Skype, only to send me a local recording of themselves that was made not with their fancy USB microphone, but with the lousy microphone embedded in their laptop or with the (somewhat less lousy) microphone on their earbuds.
But a failure in Call Recorder can be catastrophic. Call Recorder saves its files as QuickTime movies, and if the program doesn’t finish saving that file—say, there’s a crash or a power failure—the entire thing is unsalvageable. So it’s good to have a backup, if not more than one.
Anyway, I had a recording failure a few weeks ago and turned to my Nicecast backup. When I opened the file, I discovered something curious: The file was a stereo recording with both my voice and voices on Skype on the left side, but only the voices of my panel on the right side. This probably happened because I’m using a stereo USB audio interface but only a single microphone.
So I had a thought. Having an isolated audio track of my own voice would improve the quality of the recording and reduce the amount of time I’d spend editing the podcast. Could I somehow subtract the content of the right side from the left, leaving me with a recording of just my own voice?
The answer turned out to be yes. I used my basic audio touch-up tool of choice, Sound Studio, to recover my microphone audio and save the day. First, I copied out each side of the stereo track into its own mono file. Then I selected the entire contents of the right track (the one containing just my panelists’ voices) and chose Audio: Invert Signal Polarity.
Go back to high school physics for a second. A wave can be cancelled out by an identical, but inverted wave. This works in the ocean (where two waves can interact and end up cancelling each other out) and it works in sound, too. It’s also a principle used in noise-cancelling headphones.
Anyway, once I inverted the polarity of the panelist-only signal, I copied the result and switched to the window containing the audio of my voice and the panelists together. Using Sound Studio’s Mix Paste command, I pasted the inverted sound over top of the original. And, much to my surprise, it actually worked! The mix paste had subtracted the other voices from the file, resulting in a track that contained only my voice.
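The arithmetic behind the trick is worth spelling out. Here’s a minimal sketch using NumPy arrays as stand-ins for the two mono tracks: summing the mixed channel with the inverted panelists-only channel cancels the panelists and leaves only my voice.

```python
import numpy as np

t = np.linspace(0, 1, 44100)             # one second at 44.1 kHz
my_voice = np.sin(2 * np.pi * 220 * t)   # stand-in for my microphone
panelists = np.sin(2 * np.pi * 330 * t)  # stand-in for the Skype voices

left = my_voice + panelists   # left channel: me plus everyone on Skype
right = panelists             # right channel: the Skype voices only

inverted = -right             # "Invert Signal Polarity"
recovered = left + inverted   # "Mix Paste" over the left channel
# recovered == (my_voice + panelists) - panelists == my_voice
```

It only cancels this cleanly because the panelists’ audio is identical in both channels; any level or timing difference between the two copies would leave residue, which is why the real-world result still needed some cleanup.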
Though the quality of the Nicecast archive wasn’t as high as my Call Recorder file (because I was using lower quality settings for the backup), it was still pretty good. I used the track in an episode of The Incomparable and I’m pretty sure nobody noticed a thing.
Like I said, I’m not sure how often this sort of thing comes up in the real world. But if you ever run into this problem, I hope you’ll remember this story and try this approach. It might save your bacon like it saved mine.
By Glenn Fleishman
March 16, 2015 7:00 AM PT
Back in the depths of time, newsletters were a big business. Printed inexpensively in small quantities for investors or people in specialized industries, like lumber or printing, subscriptions could run hundreds to thousands of dollars a year. A few hundred subscribers made them profitable; a few thousand, lucrative, allowing for staff writers and researchers.
These newsletters had timely information that wasn’t found in daily newspapers or weekly magazines. There was no cable news network, and radio was mostly local, supplemented by nationally syndicated programs.
Even when cable TV started to add channels, news radio proliferated, and early dial-up services added news features, focused information important for someone’s profession was difficult to find. Newsletters were exceedingly lucrative and appreciated. Some newsletters included or offered cassette tapes — proto-podcasts! — or omitted the paper part entirely and were just audio tapes. I knew of one on desktop publishing that was mass-faxed, and if I recall right, cost $495 per year for weekly dispatches. (MacPrePress, produced by the late Kathleen Tinkel and Steve Hannaford, for those with long memories.)
The Internet’s emergence derailed a lot of these newsletters, because scarce data became easily available, and specialized information sites rose quickly. Data whose scarcity had justified a high price was suddenly, or within a few years, available freely, ubiquitously, and instantaneously. In some cases, the dollars shifted: the money paid for postal-dispatched paper shifted to online subscriptions to Web sites, email newsletters, and access to databases. The excellent credit-card industry newsletter, the Nilson Report, now delivers 23 issues in PDF form and snail mail each year for $1,495, for instance, plus the past five years as an electronic archive. In other cases, the newsletters evaporated entirely.
Blogs were certainly part of the reason. Once easy-to-use blogging software appeared in the early 2000s, a hundred million blogs bloomed, and a tiny portion were dedicated to reporting and analysis, including my own Wi-Fi Networking News (WNN) blog. In a previous era, WNN would have been a newsletter that, based on the interest I saw on the site, would have grossed hundreds of thousands of dollars a year. (The blog brought in $30,000 to $40,000 a year in the mid 2000s.)
Many of the most prolific and focused bloggers who covered tech and finance turned those blogs into businesses, were acquired by larger media companies, or were hired by publications to write for them and often start in-house blogs.
By Jason Snell
February 2, 2015 7:18 AM PT
Once you’ve recorded your podcast, it’s time to edit. Editing can be incredibly simple—trim the beginning and end point and be done with it—or as complicated as you want to make it. I use a few different editing approaches based on my tools and the needs of the particular shows I do. Let me describe them to you now…
By Jason Snell
January 14, 2015 1:16 PM PT
I do a lot of podcasting. And I am often asked about what tools I use and how I produce my podcasts. So in a series of articles on this site, I hope to detail my approach to making podcasts. What I don’t intend is to suggest that this is the only way to make podcasts—it’s just the way that I make them. If I can provide some sort of inspiration—or even a cautionary example of what not to do—I’m glad to do so.
While I think it’s true that many people underestimate how much work goes into making a podcast, I also get the sense that other people overestimate the time I spend. And depending on what kind of a podcast you’re creating, the amount of time required to put it together can vary widely. The average episode of The Incomparable probably takes three or four hours to edit; the average TV Talk Machine I can turn around in 10 minutes.
By Jason Snell
November 28, 2014 11:37 AM PT
I make podcasts as part of my job now, but despite my year spinning records at my high-school radio station, I don’t have much of a background in audio. Like many podcasters, I’ve learned as I’ve gone along, and I’ve upgraded my hardware and software along the way.
I’m frequently asked for product recommendations for podcasting, and while I can’t claim to have tried every USB microphone out there, I have tried many of them and heard the recording results of many more. I’ve also talked to audio experts, sometimes even voluntarily.
Last night I had a couple of exchanges on Twitter that really irked me. I mentioned that the Blue Yeti, the microphone that I use, was on sale at Amazon. (That sale has since ended.) It seems like every time I mention the Yeti on Twitter, I’m immediately sea-lioned by an audio expert who wants to point out that the Yeti is not suitable for professional use.
Point one: I wasn’t recommending it to professionals, I was recommending it to podcasters who are not pros, the ones using headsets and Blue Snowballs and Apple EarPods. Point two: It’s the microphone I’ve used for the last two years, so I think maybe calling it unfit for professional use is not only insulting to me, but wrong on its face.
Anyway, the great thing about podcasting is that anyone can do it. You don’t need to have access to a broadcasting company’s radio transmitter and studios packed with equipment. You can reach people with your voice right now. Yes, these days there are a lot of big names (often from those big broadcasting companies) doing podcasts, but there’s also an incredible diversity of voices and subjects.
If you’re just starting out, don’t allow yourself to be intimidated by all this audio talk. If you have something to say, say it.
I don’t deny that I’ve heard some pretty awful sounding podcasts in my day. Audio quality does matter. I’d just argue that beyond a certain point, it only matters to audio snobs. My favorite podcast, The Flop House, often has some severe audio problems—but it doesn’t matter, because the content is great.
So start with the equipment you’ve got. You could literally do a podcast by talking into your iPhone and posting it. (I don’t recommend it, but you could do it.) Every Apple laptop comes with a built-in microphone. Again, I don’t recommend you use that microphone, but you could. You could use the EarPods that come with your iPhone—and I’d recommend them over that laptop microphone any day. Add an external microphone when you get the chance. Learn how to use GarageBand or Audacity to edit your podcast—both of them are free.
Beyond that, here’s a tiny bit about hardware.
By Jason Snell
November 24, 2014 9:20 AM PT
Today Moisés Chiullan announced that Brett Terpstra’s Systematic podcast and Christina Warren and Brett Terpstra’s podcast Overtired are moving from 5by5 to ESN.fm.
There’s been a lot of podcast movement lately, which isn’t really surprising given how young this medium (or whatever) is. Not everyone finds podcast networks valuable, but they can helpfully group shows of similar sensibilities together, provide exposure for new shows that might otherwise be missed, and offer a technical or financial infrastructure that can be convenient for people who have something to say but don’t want to build a podcasting business1.
And sometimes after a while, those hosts or shows are ready to spread their wings, creatively or technically. Plenty of talented hosts have left 5by5, but you know what? My pals Merlin Mann and Andy Ihnatko are still there, and the indefatigable Dan Benjamin’s producing new audio and video shows all the time.
Since we moved Clockwise from IDG (with the blessing of some nice folks in IDG management) to Stephen Hackett and Myke Hurley’s new Relay FM network, the audience of that show has more than doubled. Being on Relay helped expose the show to a great audience of tech-podcast listeners, and has also helped us grow Upgrade rapidly.
I should mention that as of the most recent episode of The Incomparable, I’m no longer posting episodes to the 5by5 network. We started the show in 2010, and Dan quickly started recruiting me. A little more than a year later, we joined 5by5, and it helped expose my odd little pop-culture show to a much wider audience2.
As time wore on, I decided I wanted to build something on my own, and launched spin-off shows on The Incomparable Network. That project also allowed me to add show metadata that 5by5 simply couldn’t or wouldn’t offer, like a page of all our Star Wars episodes or an index of show topics.
At that point the clock was ticking. I began posting the show to both networks. After a communication failure at 5by5 forced me to abandon a live episode just as it was starting, we set up our own live-stream system that we could control. And most recently, I gave Dan notice that we were changing ad-sales teams. The relationship was at an end. It was time to make it official.
I’m a believer in the medium—it’s one of the ways I expect to support myself and my family now that I’m on my own. But these are the early days. Things are changing rapidly. There are always new podcasts and new networks. (And yes, it’s worth reminding ourselves that this is not the only new-media opportunity out there.)
This reminds me of nothing more than the early days of the web. The younger people out there might not remember, but that period was like the wild west. Things changed every day. Podcasting’s going through something similar.
Anyway, thanks to everyone out there who has listened to some of my podcasts. And best of luck to Brett and Christina on their new adventures with Moisés at ESN.
John Gruber, Marco Arment, John Siracusa, and Merlin Mann were unlikely to have devoted the time to podcasting when they started—but Dan Benjamin offered technical expertise and an ad-sales infrastructure, as well as being an excellent conversational foil.↩
Nothing really changed with the production of the show when we moved—I’ve produced and edited almost every episode, and Dan never had any input into the content.↩
By Jason Snell
November 17, 2014 10:11 AM PT
There’s a lot of talk about podcasting these days, mostly because big names from public radio are doing interesting new things with the medium, and people who write for major media outlets tend to listen to public radio. All of a sudden, thanks to the imprimatur of big media, podcasting is apparently back. Even though all the tech geeks have been listening to podcasts for years now, and it’s been growing as a medium all this time.
Still, as a huge fan of the medium (you may have noticed), I’m happy that more attention is being paid to it. A rising tide lifts all boats—and this stamp of approval from mainstream media will reach future podcast listeners and future podcast advertisers alike. It’s a good thing.
Media outlets aren’t the only ones suddenly paying attention to podcasting. Today Ingrid Lunden at TechCrunch reports that Spotify’s app includes hidden references to podcasting features. This follows the purchase of podcast service Stitcher by Spotify competitor Deezer last month.
More importantly for Spotify, Deezer gave me a smart explanation of why podcasting was interesting: Deezer is making a big move to do more with in-car services, and podcasts and talk radio are especially popular in that setting. It could be that Spotify, which also has a number of connected-car integrations in place, is thinking along the same lines.
Podcasts are replacing the radio for tech savvy car commuters, and once less savvy commuters are exposed to podcasting I suspect they’ll do the same. I’m not entirely convinced that Spotify is the best vehicle for this, but someone’s going to crack it. As Marco Arment wrote yesterday, it may take some time:
Smartphone podcast apps and Bluetooth audio in cars have both helped substantially, but both have also been slow, steady progressions that are nowhere near complete. No smartphone app has caused a massive number of new listeners to suddenly flood to podcasts, and people don’t upgrade their cars frequently enough for any automotive media features to cause market booms. A lot of people still listen to podcasts in iTunes, and a lot of cars still don’t have Bluetooth audio. We’ll get there, but it takes a while.
If one of the biggest concentrations of podcast listenership is in the car, then the difficulty of connecting podcasts to cars becomes the biggest barrier to the success of the medium. Car tech has traditionally been terrible, thanks to the weird dance between automakers and their equipment suppliers—but that’s starting to change, mostly thanks to Google and Apple. The new Android Auto and CarPlay features allow most new smartphones to project a simplified version of their interfaces onto the screens of compatible car-entertainment devices.
Yes, as Marco points out, this will take years to trickle down to most cars, but it will. It makes too much sense to let the likes of Google and Apple drive these entertainment systems with the much better hardware and software that’s in the pocket of almost every driver.
While I think there’s a huge opportunity to bring the podcast medium to a broader collection of listeners—if I were to do a tech startup, it would probably be something related to this—I’m not convinced that the Spotifys of the world are the right companies to do it. Spotify’s brand is about music, not talk. It’s also unclear what Spotify’s terms would be, and as someone who thinks Stitcher’s terms are really crappy, that’s a serious concern.
No, the company that could do the most to make podcasting a success is Apple. Apple’s got the biggest directory of podcasts on the planet at iTunes and the two most popular podcast-listening apps (Podcasts and iTunes). In the mid-2000s, Apple tried to make podcasting the next big thing, and the world wasn’t ready. Apple’s commitment to podcasting dramatically receded after that—remember when GarageBand was for podcasting?—but with iOS 8 it added Podcasts as a default app, so maybe the tide is turning.
It’s great that podcasting is having a moment in the spotlight. Maybe this is the right time for Apple and other tech companies to forget about the false-start of 2005 and bring this amazing medium to the masses. I’m pretty sure they’re going to love it.
[Hat tip to Federico, Stephen, and Casey.]
By Jason Snell
September 29, 2014 8:08 AM PT
[This is probably the first in a series of posts about nerdy podcast things. Apologies to everyone who’s not a podcaster. Are there people left who aren’t yet hosting their own podcasts? Your time will come…]
At WWDC this year, I hosted a bunch of podcasters in IDG’s podcast studio. (You can drive up to Mill Valley and use my garage next year, folks.) During the recording of Accidental Tech Podcast, I noticed something interesting: Marco Arment was streaming his show live from his iPad.
As someone who streams his own podcasts live, I was intrigued by Marco’s setup. And while Marco uses this particular setup when he’s on the road (he has a mixing board when he’s at home), for the past few months I’ve been using the same setup to stream The Incomparable. From an iPad mini. (I usually use Nicecast from Rogue Amoeba, but various aspects of my Mac’s audio system began behaving strangely when I started using the Yosemite betas.)
In fact, one of the great advantages to this approach is that you don’t have to deal with the Mac’s finicky sound system, which should be much better than it is. (I’d like to be able to, for example, route a couple of USB microphones and the audio from a couple of Mac apps into a virtual input that gets sent out over Skype. There was some great software that used to do this, but most of it died when Lion was released, believe it or not.) Some new software is slowly starting to appear that fills in the gaps, but the beauty of using an iOS device to stream audio is that your Mac doesn’t have to worry about any of that—all it has to do is play sound, which it’s doing already.
The centerpiece of what I’ve taken to calling the Marco Method is the Behringer UCA202, a $30 USB audio interface. Combine that with Apple’s Lightning to USB Camera Adapter, plug into your iOS device, and you’ve got the start of something. (Yes, iOS devices supply enough power to the UCA202 to keep it running, which is not the case with many USB-based audio interfaces.)
Next up is an RCA-to-minijack cable. The RCA inputs plug into the UCA202, and the minijack goes where I would normally plug my headphones—when I’m podcasting, that’s my Blue Yeti USB microphone. The UCA202 has its own headphone jack and volume wheel, so I plug my headphones in there and can ride the volume wheel to get the right volume for my ears, separate from the right volume for the live stream.
That’s the hardware side. On the software side, Marco discovered a $5 app by Anthony Myatt called iCast Pro. It’s not much to look at, and it’s an iPhone app so it runs in blown-up mode on an iPad, but it connects directly to an Icecast server, which is what both of us use to stream live. The Icecast server then relays the audio stream to anyone who wants to tune in.
This approach doesn’t provide any way to charge the battery of the iOS device you’re using to stream, but my fully charged iPad mini could probably stream for five hours before running out of juice. I haven’t yet had the chance to test out this setup in the field, but it really allows you to stream live from just about anywhere. Thanks for the tip, Marco.