Angling to develop for Google Glass? Google gives some insight

A developer advocate outlines an API and some design principles.

A demo of how to use the Mirror API and its output during Timothy Jordan's talk.

If you’re looking for a taste of what it will be like to develop for Google Glass, the company posted a video on Thursday demonstrating the hardware and a little bit of the API. Timothy Jordan, a senior developer advocate at Google, gave a talk at SXSW in early March, lasting just shy of an hour, that offered a look into the platform.

Google Glass bears more similarity to the Web than to the Android mobile operating system, so developing for it is simpler than creating an Android application. During the talk, Jordan goes over some of the functionality developers can get out of the Mirror API, which allows apps to pop Timeline Cards into a user’s view, as well as show new items from services the user might be subscribed to (weather, wire services, and so forth).
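Based on the talk's description, inserting a Timeline Card amounts to an authenticated HTTP POST of a small JSON document to Google's servers. The sketch below shows what that might look like in Python; the endpoint URL, token, and field names (`text`, `speakableText`) follow the Mirror API's documented conventions, but treat the details as illustrative rather than a definitive client.

```python
import json
import urllib.request

# Illustrative endpoint and placeholder OAuth token -- a real app would obtain
# the token through Google's OAuth flow.
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"
ACCESS_TOKEN = "ya29.EXAMPLE_TOKEN"


def make_timeline_card(text, speakable=True):
    """Build the JSON body for a simple text Timeline Card."""
    card = {"text": text}
    if speakable:
        # Lets Glass read the card aloud on request.
        card["speakableText"] = text
    return card


def build_insert_request(card):
    """Prepare (but do not send) the HTTP POST that would insert the card."""
    body = json.dumps(card).encode("utf-8")
    return urllib.request.Request(
        MIRROR_TIMELINE_URL,
        data=body,
        headers={
            "Authorization": "Bearer " + ACCESS_TOKEN,
            "Content-Type": "application/json",
        },
        method="POST",
    )


card = make_timeline_card("Rain expected this afternoon")
request = build_insert_request(card)
# urllib.request.urlopen(request) would perform the actual insert.
```

Note that everything is server-to-server: the app never talks to the headset directly, which is exactly the Web-like (rather than Android-like) model the talk describes.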

Jordan also shows how users can interact with items that crop up using the API. When the user sees something they like, for instance, they can re-share it with a button or “love” it.
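The re-share and "love" interactions Jordan demonstrates are, in the Mirror API's model, menu items attached to a card's JSON. A minimal sketch, assuming the API's `menuItems` convention (the action names here are illustrative):

```python
# Sketch of attaching interaction options to a Timeline Card. Built-in
# actions like SHARE and REPLY are real Mirror API concepts; exact behavior
# on the device is an assumption for this example.
def add_menu_items(card, actions):
    """Attach a list of menu actions (e.g. SHARE, REPLY) to a card dict."""
    card["menuItems"] = [{"action": action} for action in actions]
    return card


card = {"text": "Sunset over the bay"}
add_menu_items(card, ["SHARE", "REPLY"])
# The card now carries the options a user can trigger from the Glass touchpad.
```

When the user picks an option, Google's servers notify the app's subscription endpoint, so the interaction loop also runs through the Mirror API rather than on-device code.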

The talk also lays out some overarching design principles for the system, including “don’t get in the way”—Jordan specifies that the interface is in the users’ field of view but off to the side, not in the way of their line of sight—and “keep it timely,” imploring developers to work with data from “the last few months or year.”

Sneak peek is right... took him 50 minutes to reiterate the already known, at a kindergarten level. Is this a crowd of developers or potential investors? I can't tell because this is both a simple yet lengthy video.

Google should focus on more informative demonstrations of these glasses and less on marketing. Lately it seems like Google Glasses are meant for indie-music listeners that will only use them to take pictures of each other's new outfits.

This skinny-jeans-wearing evangelist, Timothy, probably tells everyone to just call him Jordan. He has Google Glasses on, yet he still looks down and reads off the podium. For some reason, I would think #1 he could control a slideshow with these glasses, #2 use them as a teleprompter, or #3 they could at least show what he sees and hears to the live audience.

Even if these demos were just contrived, mundane situations involving construction, production, medical, clerical/administrative, or media/gaming applications in action... it would be a better demo than these paid hipsters declaring it the best new thing.

I was hoping Glass would be seen as a remote device for your smartphone, with a low-level API to access the display, the speaker and mic, the touch control, and the gyro sensors, and do whatever you want with them (much like a Bluetooth headset is a remote speaker/mic).

Instead we get a walled, Google-centric, web-level API which is limited to a basic 'card' system, and everything you do with it must go through Google servers.

While it's nice for rapid development, it's extremely limited in what you can do with it.

It's very Apple-ish too, and it looks like Google wants to keep a very firm grip on what people will do with Glass.

At least at the start; maybe later they'll open up the device a bit more. Otherwise this will just smother innovation.

If this is a representative example of the only interfaces available to program for Glass, it doesn't look very promising; it just seems so... limited.

Google is introducing a wholly new class of product here. Even as clever as they are, they can't possibly conceive of all the applications and use cases people could come up with for it. Why hamstring it with such a nearsighted programming model? I can *already* imagine a bunch of great uses for the device that would be incompatible with it, let alone all the clever stuff people around the world could dream up.

So how much is Google Glass going to cost? Sure, I'd like to develop for it, who wouldn't want to do something new, but if it's going to be another Microsoft Surface that doesn't sell because it's too expensive, I have better things to do with my time. Google needs to give developers some sort of road map for how Glass is going to be adopted by the masses.

I'm worried that, according to the presentation, little to no context is involved in the Glass apps. It's the exact opposite of what I expected. Reading news and taking/sharing photos? Seriously? I can do that, and I can do that better, with my phone.

Either Glass can feed context and real-world events to the services, or it will be REALLY limited in usefulness.