Saturday, July 16, 2011

A laser diode is formed by doping a very thin layer on the surface of a crystal wafer. The crystal is doped to produce an n-type region and a p-type region, one above the other, resulting in a p-n junction, or diode.
...
When an electron and a hole are present in the same region, they may recombine or "annihilate" with the result being spontaneous emission — i.e., the electron may re-occupy the energy state of the hole, emitting a photon with energy equal to the difference between the electron and hole states involved. Spontaneous emission gives the laser diode below lasing threshold similar properties to an LED.
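Since the photon's energy equals the gap between the electron and hole states, the emission wavelength follows directly from the band gap via E = hc/λ. Here is a minimal sketch of that calculation; the GaAs band-gap value is an illustrative figure, not taken from the text above.

```python
# Constants for converting a band-gap energy (eV) to a photon wavelength (nm).
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def emission_wavelength_nm(gap_ev):
    """Wavelength (nm) of a photon whose energy equals the given gap."""
    return H * C / (gap_ev * EV) * 1e9

# GaAs has a band gap near 1.42 eV, giving near-infrared emission ~870 nm,
# which is why early diode lasers (and CD-player lasers) are infrared.
print(round(emission_wavelength_nm(1.42)))
```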

The structure, a lasing medium between two conductive partial mirrors, is simple:

The physical chemistry is more complicated, and it took a lot of research to find out how to get it right so they work at room temperature, are cheap, and last a while.

This is a visible light micrograph of a laser diode taken from a CD-ROM drive. Visible are the P and N layers distinguished by different colours. Also visible are scattered glass fragments from a broken collimating lens.

The first laser diode to achieve continuous wave operation was a double heterostructure demonstrated in 1970, essentially simultaneously, by Zhores Alferov and collaborators (including Dmitri Z. Garbuzov) in the Soviet Union, and by Morton Panish and Izuo Hayashi working in the United States. However, it is widely accepted that Alferov's team reached the milestone first. For this accomplishment and related work, Alferov and Herbert Kroemer shared the 2000 Nobel Prize in Physics.

I don't think computers will have any important effect on the arts in 2007. When it comes to the arts they're just big or small adding machines. And if they can't "think," that's all they'll ever be. They may help creative people with their bookkeeping, but they won't help in the creative process.

Kinect-Hacking Conference

Art && Code: 3D is a festival-conference about the artistic, technical, tactical and cultural potentials of 3D scanning and sensing devices — especially (but not exclusively) including the revolutionary Microsoft Kinect sensor. This highly interdisciplinary event will bring together, for the first time, tinkerers and hackers, computational artists and designers, industrial game developers, and leading researchers from the fields of computer vision, HCI and robotics. Half-maker’s festival, half-academic symposium, Art && Code: 3D will take place October 21-23 at Carnegie Mellon University in Pittsburgh, and will feature:

Omek Interactive, a provider of tools that enables companies to incorporate gesture recognition and full body tracking into their applications and devices, has secured $7 million in financing in a round led by Intel Capital, TechCrunch has learned. The Series C round brings the company’s total funding raised to nearly $14 million.
Omek’s Beckon technology converts the raw depth map data from most major 3D cameras into an awareness of people and their movements or positions in front of the camera, enabling them to be converted into commands that control hardware or software.
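The general idea of turning a raw depth map into commands can be sketched crudely: segment the nearest object, track its centroid, and map position to a command. This is an illustrative toy, not Omek's Beckon API; all names and thresholds here are made up.

```python
import numpy as np

def nearest_object_centroid(depth, max_range_mm=1500):
    """Return (row, col) centroid of pixels closer than max_range_mm."""
    mask = (depth > 0) & (depth < max_range_mm)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def to_command(depth):
    """Map the nearest object's horizontal position to a crude command."""
    c = nearest_object_centroid(depth)
    if c is None:
        return "idle"
    _, col = c
    third = depth.shape[1] / 3
    if col < third:
        return "left"
    if col > 2 * third:
        return "right"
    return "center"

# Synthetic 480x640 depth frame (mm): a "person" blob 1200 mm away, left side.
frame = np.full((480, 640), 3000, dtype=np.int32)
frame[100:400, 50:150] = 1200
print(to_command(frame))  # the blob's centroid falls in the left third
```

Real middleware does far more (skeletal tracking, temporal smoothing, gesture classification), but the raw-depth-to-semantics pipeline has this basic shape.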

Nearly a million people have watched UC Berkeley's PR2 folding towels and sorting socks on YouTube, and it's easy to understand why: having a robot that can do your laundry is a fantasy that's been around since The Jetsons, and while we're not there yet, it's not nearly as far off a future as it was before the PR2 Beta Program. Since those demos, one of the research groups at Berkeley has been working on ways of making the laundry cycle faster, more efficient, and more complete, and for starters, they've taught their PR2 to reliably handle your pants.
The goal of Pieter Abbeel’s group is to teach a robot to solve the laundry problem. That is, to develop a system to enable a robot to go into a home it's never seen before, load and unload a washer and dryer, and then fold the clean clothes and put them away just like you would. The first aspect of this problem that the group tackled was folding, which is one of those things that seems trivial to us but is very difficult for a robot to figure out since clothes are floppy, unpredictable, and often decorated with tasteless and complicated colors and patterns.

Although robots have made great strides in manufacturing, where tasks are repetitive, they are still no match for humans, who can grasp things and move about effortlessly in the physical world. Designing a robot to mimic the basic capabilities of motion and perception would be revolutionary, researchers say, with applications stretching from care for the elderly to returning overseas manufacturing operations to the United States (albeit with fewer workers).
Yet the challenges remain immense, far higher than artificial intelligence hurdles like speaking and hearing.

Monday, July 11, 2011

Distinguishing between sarcasm and irony is ironic. Expressing an opinion about the distinction is sarcastic.

Claire L. Evans, from Portland, OR, writes about (and does art/performance about) science and technology issues. Her sequence of blog posts about moon arts strikes a chord.

What gestures should a vision system understand? A prime requirement is that gestures should be easily learned by humans, but it is not clear what is most natural or effective. Here's a discussion of future gesture interfaces for the Kinect. It's anybody's guess as to what will work well enough for people. The best way to find out is to try things:

While dropping off Cosmo for a summer camp I had the chance to check out OMSI's video game exhibit, Game On 2.0. I wasn't expecting much, but was very impressed. Not only were the games well set up to be played, there was an impressive range of platforms and hardware, from handhelds to pinball. Mixed in was good information about the game industry and game development, plus some original art. The tacit message was that computer games are not just a bit of pointless fun, but a driver of the computer industry.

Play your way through the past, present, and future of global gaming. From Pong to Gran Turismo, Game On 2.0 is a hands-on experience of video game history and culture, and includes over 125 playable games, including Mario All Stars, Wii Sports, Gran Turismo, Halo Reach, Pacman, Zelda and Sonic the Hedgehog.

Explore over 40 years of gaming entertainment; from the very first commercial coin-op game to the latest in virtual reality and 3D technology. Game On 2.0 celebrates game design, development, and production including original concept and character art and history’s most influential arcade consoles.

While I think the AI will advance enough to achieve this goal, I predict that the power technology will be woefully inadequate to the task. Electric cars have been around for more than 100 years now, and everyone agrees they are a good idea, but you still can't plug in a Prius.

And the best soccer team won't be humanoid, no matter how photogenic; we are just not that cute or functional on the pitch:

I think that the soccer playing premise is good, but I'm more interested in video game playing robots. Games are designed to tweak our interest and exercise our humanity, while soccer is largely a sport for spectators. When will a robot master Pong, PacMan, or World of Warcraft?

I don't know the answer, but this is what will close the gap in embodied intelligence. Below is a first step toward that goal: a three-relay bot that "plays" the Tower of Hanoi faster than a human, on a device designed for human gestures (taps). While the sequence of moves is not found autonomously, it won't be long before robots entertain themselves through game play.
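The move sequence the bot taps out can be generated with the classic recursion: move the top n−1 disks aside, move the largest disk, then move the n−1 disks back on top. A minimal sketch:

```python
def hanoi_moves(n, src="A", dst="C", via="B"):
    """Return the optimal Tower of Hanoi move list as (from, to) peg pairs."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, via, dst)   # clear the way
            + [(src, dst)]                      # move the largest disk
            + hanoi_moves(n - 1, via, dst, src))  # restack on top

moves = hanoi_moves(3)
print(len(moves))  # 2**3 - 1 = 7 moves for three disks
```

Finding this sequence is trivial for a program; the hard part, as the post notes, is a robot discovering it autonomously through play.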

What do robots look like? It is often assumed they will evolve toward some sort of human form, and many are wedged into this bipedal mold. What about real robots, those designed for practical use, where there is less anthropomorphic social pressure?
Industrial robots have decades of experience behind them now, and their range of forms has settled down. They have one arm and no legs, and are typically bolted to the floor or mounted on a specialized gantry. Here's one marketed to foundries, the KUKA KR 1000 Titan:

While this is a larger model, its body plan is recognizable in a wide range of industrial bots. Most industrial robots are quite a bit larger than humans, even though they are typically used for human scale products like cars. They don't know their own strength, and don't pay much attention to humans, so photos of them with humans are not common. Here's a similar KUKA bot swinging around a couple humans like so much meat (Wikipedia, Robocoaster):

What about autonomous robots? They have much different design requirements and are still evolving. Often they have a more car-like body plan. After several iterations we now know what the near-future looks like, the most sophisticated semi-autonomous bot ever:

This image of NASA's Mars Science Laboratory rover, Curiosity, was taken during mobility testing on June 3, 2011, inside the Spacecraft Assembly Facility at NASA's Jet Propulsion Laboratory in Pasadena, Calif.
Preparations are on track for shipping the rover to NASA's Kennedy Space Center in Florida in June and for launch during the period Nov. 25 to Dec. 18, 2011.
JPL, a division of the California Institute of Technology in Pasadena, manages the Mars Science Laboratory mission for the NASA Science Mission Directorate, Washington. This mission will land Curiosity on Mars in August 2012. Researchers will use the tools on the rover to study whether the landing region has had environmental conditions favorable for supporting microbial life and favorable for preserving clues about whether life existed.

Saturday, July 2, 2011

Jack Eisenmann, a programmer who just graduated high school, has built his own 8-bit homebrew computer completely from scratch using an old keyboard, a television, and a ton of TTL logic chips. No, he didn't buy some computer parts and snap them together; he blueprinted every wire and connection and then built it, wire by wire. After he finished construction, he had to teach it how to communicate, so he created his own operating system and wrote some games for it.

In recent months, a pool of innovative L.A.-based artists who create music in an electronic subgenre called chiptune have formed the Obsolete collective, and have commenced throwing shows to celebrate their lo-bit love affair.

The water-filled bowls, when rubbed with a leather-wrapped mallet, exhibit a lively dance of water droplets as they emit a haunting sound. Now slow-motion video has unveiled just what occurs in the bowls; droplets can actually bounce on the water's surface.

Designed by Kieron Gillen and funded by Britain's Channel 4, The Curfew takes place in 2027 in a UK dominated by "the Shepherd Party," which plays on fears of terrorism to impose near-absolute control over its citizens. It does so through gamification: you earn "citizen points" for obedience, and lose them through disobedience. Earn enough, and you can be a "Class A" citizen; it's not clear what this gets you, other than jumping the queue at fast food joints.

She's been called the Queen Mother of science fiction, but today Ursula Le Guin is finishing a book of poems about Steens Mountain country, working with photographer Roger Dorband. We join them in Harney County as she talks about her new book and reflects on her life as a writer.