Vocal app uses your iPhone 4S to control your Mac

The trite thing to do when writing about any software that can handle converting your speech into text is to do so using that software. I’m nothing if not trite, so that’s precisely what I’m trying to do here, with Vocal. Vocal is a new app from developer Matthew Roberts that leverages the power of voice transcription on your iPhone 4S to control your Mac. Vocal can take dictation and send the transcription to your Mac, and also perform a variety of other actions based on your voice commands.

For Vocal to work its magic, you need to install a free companion app on your Mac. Then, of course, you also need to pick up the $2 app from the App Store. Run the Mac app, and then launch the Vocal app on your iPhone; the iPhone needs to be on the same Wi-Fi network as your Mac. In theory, the app should list the name of your connected Mac, since the app and the Mac discover each other via Bonjour. In my own testing, however, I needed to force-quit Vocal on my iPhone: double-tap the Home button to bring up the multitasking bar, hold down on the app’s icon there, and then tap the minus sign. Relaunching the app after doing so allowed my Mac and the app to see each other.

At that point, I tapped on my Mac’s name within the app, and was then prompted to enter the passcode that Vocal displayed on my Mac’s screen. Once that was done, Vocal was ready to listen—and act.

Because it uses the systemwide dictation built into the iPhone 4S, Vocal isn’t subject to Siri’s normal timeout. Siri cuts you off automatically as you dictate emails or texts if you pause for too long; the dictation option (triggered by tapping the microphone on the virtual keyboard) listens for much longer. That’s quite beneficial within Vocal, since it gives you more time to gather your thoughts as you compose sentences. (Vocal does eventually stop listening, but I believe that’s due to a mandatory systemwide limit on the iPhone 4S’s dictation feature, perhaps based on data or memory usage.)

Vocal puts the virtual keyboard on screen, even though you likely won’t need to type into the app. Instead, you just need access to the microphone key. Tap that and start speaking; tap the Done button when you’re finished. Vocal then acts upon your spoken instructions immediately; you don’t need an extra tap to submit your text.

That autosubmission when you’re finished speaking makes the process feel noticeably faster. As soon as I finish speaking these sentences, I’ll tap the Done button, and Vocal will immediately paste this text into my text editor. (If the cursor isn’t positioned within a text entry field or document, Vocal still ensures that the transcribed text is copied to your Mac’s clipboard so that you can paste it manually.) Most of the time, that is. Sometimes, for reasons I can’t quite pin down, I still have to push the Send button manually within the Vocal app.

Transcription, though, is only a small part of Vocal’s claimed feature set. The app is also meant to let you do things like control iTunes; send emails and tweets; look up definitions; select, copy, and paste text; search Amazon and Google; print; and create new documents. Some of those actions work brilliantly—when I said “Tweet the people at the Apple Store are generally very nice,” Vocal successfully opened a New Tweet window within the official Twitter client and pasted in my text.

Other controls are less full-featured. The iTunes controls, for example, require more stilted phrasing than Siri does. I said, “Play ‘Artificial Heart,’” and Vocal simply started playing iTunes from its current song, ignoring my specific request. When I tried “Play the song ‘Artificial Heart,’” Vocal reported that it “couldn’t find a song in iTunes titled ‘Play the song artificial heart.’” “Play song ‘Artificial Heart’” felt more mechanical, but got the job done.

By default, Vocal attempts to determine automatically whether you’re dictating text to be transcribed or issuing instructions to your Mac. In practice, it works well—unless you try to start sentences with words like “tweet” or “pause.” That’s easy enough to work around, since you can turn the feature off whenever it misbehaves.

Other spoken instructions worked great—“Define pugnacious,” which launched the Dictionary app to the right word; “Search Amazon for The City and The City”; and “Search Google for ‘chocolate baskets’.” Some actions, on the other hand, seem hard to justify; I can’t imagine “Print this page” or “Open a new document” saving much time if you’re close enough to your Mac to see its screen.

If you simply tell Vocal, “Search for Great American Novel,” it’ll attempt to perform a Spotlight search. On my Mac, though, I’ve assigned the Command-Space keyboard shortcut to LaunchBar, and apparently that affects how Vocal works behind the scenes; the utility searched with LaunchBar instead of Spotlight.

Vocal sorely needs a Siri-like info button that shows you all the commands it can handle. Right now, the Support tab at the developer’s Website is the easiest place to find available instructions.

In short, Vocal is definitely cool. You’ll be hard-pressed to find a cheaper way to get your Mac to take dictation. Its support for other actions is a mixed bag; some work well, some don’t. If nothing else, Vocal proves that Siri—natively—on the Mac could be nothing short of remarkable.