Sergey Brin, co-founder of Google, watches a live broadcast as two parachutists prepare to jump out of a plane while wearing Google Glass during Google I/O 2012 at Moscone Center in San Francisco on June 27, 2012

I’ve just slipped on a pair of Google’s funky, futuristic eyewear, which looks a little like half of a lensless, tricked-out pair of Oakleys. I’m also driving. Out the window, I see an interstate sign — through my pair of Google glasses, a minimalist GPS overlay indicates this is my exit. I take it. As I’m pulling down the ramp, I tilt my gaze up at piles of gray clouds, the sun just starting to emerge. Through Google’s tiny, cyclopean display, I notice the temperature outside is 53°F and rising. Turning right at the ramp intersection, I can just see a commercial jet lifting off from the airport a few miles down the road. The readout in my heads-up display indicates my flight is on time. All the while, my phone hasn’t moved from my shirt pocket, my eyes haven’t left the road and my hands haven’t left the wheel.

O.K., none of this is actually happening — I’ve yet to lay hands on a pair of Google’s ballyhooed cyberglasses. In fact I’m only vaguely interested in head-mounted technology, whether it’s coming from Nintendo, James Cameron or Sergey Brin. It sounds — and, every time I’ve tried it, feels — like a compromise on the longer road to implanting subdermal CPUs and hardwiring our optic nerves.

But that’s still decades away (well, probably), and whether I’m Google’s target audience or not, these things are coming: a $1,500 cyberpunk geek’s dream come true, backed by a multinational corporation with a guiding hand in how we aggregate information — search, news, e-mail, maps, video, documents, translation, social networking and more — already today. Augmented-reality devices have been around for decades, but never quite like this. Smartphones and tablets do it; so can dedicated handhelds like Nintendo’s 3DS. But those devices require hands. Google Glass sits on your face, more or less the way any pair of glasses would, and instead of firing a laser at your retina, it simply displays information on a small glass monocle perched slightly above one eye.

And yet the technology’s already perturbing lawmakers: the West Virginia legislature just introduced an amendment to an existing bill to establish “the offense of operating a motor vehicle using a wearable computer with a head-mounted display.” West Virginia already bans texting while driving or using a phone without a hands-free device; the amendment would add “using a wearable computer with head-mounted display” to its list of operational no-nos.

But isn’t Google Glass also a hands-free device for your eyes? A way of potentially freeing you from looking at things that might otherwise take your eyes completely off the road, whether glancing at your phone to check the time, answer a call or scan the weather?

Let’s review some of the metrics on distracted driving. As I noted in a story on “texting-blocking” tech last year, according to a National Safety Council (NSC) report, in 2010, 21% of all crashes (1.1 million total) involved people talking on handheld or hands-free cell phones. On top of that, an additional 3% or more of crashes (at least 160,000) involved texting. And the rates have gone up since: so far in 2012, the NSC estimates that a crash involving “drivers using cell phones and texting” occurs every 24 seconds.
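For what it’s worth, those figures hang together arithmetically. A quick back-of-the-envelope check (using only the numbers cited above, not the NSC report itself) shows that 21% = 1.1 million implies roughly 5.2 million total crashes, that 3% of that total is on the order of 160,000, and that the combined annual count works out to about one crash every 25 seconds — in line with the NSC’s “every 24 seconds” estimate:

```python
# Sanity-check the cited NSC distracted-driving figures.
# Inputs are the numbers quoted in the article, nothing more.

phone_crashes = 1_100_000   # 21% of all 2010 crashes involved phone conversations
phone_share = 0.21
texting_crashes = 160_000   # "at least" 3% of crashes involved texting

# If 21% of crashes is 1.1 million, the implied total crash count is:
total_crashes = phone_crashes / phone_share
print(f"implied total crashes: {total_crashes:,.0f}")        # ~5.2 million

# 3% of that total should land near the 160,000 figure:
print(f"3% of total: {total_crashes * 0.03:,.0f}")           # ~157,000

# Spread the combined phone + texting count over a year:
seconds_per_year = 365 * 24 * 3600
interval = seconds_per_year / (phone_crashes + texting_crashes)
print(f"one phone/texting crash every {interval:.0f} seconds")  # ~25
```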

You won’t find many defending the right to text while driving, but would wearing a head-mounted display be the same thing? Isn’t the question less about whether we ought to allow heads-up technology in vehicles — we’re already required to pay attention to all sorts of rapid-fire, incoming visual information while driving — than about how much and what type of information should be allowable? Shouldn’t we at least consider whether a display’s relationship to the physical world around us can be such that it’s innocuous at worst or actually helpful at best (and not a distraction)?

Is absorbing information through a transparent lens while maintaining line of sight through your windshield the same thing as looking down at your smartphone and taking your eyes completely off the road? I don’t know that the former’s necessarily safer, since you still have to play a depth-of-field and focus game, but — at least in theory — your reaction time might improve. I’m no expert on reaction times and cognitive processes and how these interrelate to where your visual focus is, but shouldn’t we run case studies before we throw blanket legislation at hypotheticals? Allowing drivers to wear heads-up displays firing information at their eyeballs nonstop may indeed be dangerous, but allowing unstudied legislative paranoia to supersede careful research is equally so. At least do the research first, right?

I’m speculating here, but I’m wondering whether a pair of Google glasses with “driver-mode” restrictions and basic snippets of information might not be a safer way to drive. I don’t know about you, but I have to look down at the dashboard to check my speed, my gauge lights, my fuel efficiency, my GPS navigation readout, how much fuel I have left and so forth. These things require that I take my eyes off the road no matter what.

Now imagine a heads-up display that allowed you to keep your line of sight trained on the road while providing navigational info — optionally, of course, since with Google’s glasses, we’re talking about a single monocle, not two lenses that cover both eyes like a regular pair of glasses. Imagine that monocle displaying basic GPS information as you drive, perhaps drawing an outline around a road sign indicating a turnoff and generally placing unobtrusive signaling indicators over the world in real time — not unlike the way NFL broadcasters indicate the mechanics of a play on the field using video-overlay technology.

Just because you add something to your visual field doesn’t mean it has to be distracting. Indeed, some types of augmentation might be harmlessly supplemental. The question shouldn’t be whether to ban all forms of augmented information (and certainly not out of the gate, in the absence of research), but which forms of augmented information might be safe — or, indeed, might actually enhance driving safety. We already have operational guidelines for vehicles and their subsystems. Is it such a stretch to imagine a world in which devices like Google’s glasses are legal while driving, so long as they adhere to operational strictures based on careful research?