curriculum vitæ

research statement

I am a computer scientist and roboticist who is passionate about making software and products that are easy to use and self-improving. With over ten years of concentrated academic research and real-world experience bringing consumer robots to market, I have become an expert in building and programming intelligent electromechanical systems. I thrive in environments that emphasize learning and collaboration, in which I can fully devote my skills to products and causes I care about.

contact

biography

Adam Setapen is a computer scientist and roboticist who is passionate about making software and products that are easy to use and self-improving. With over ten years of concentrated academic research and real-world experience bringing consumer robots to market, he has become an expert in building and programming intelligent electromechanical systems. Adam thrives in environments that emphasize learning and collaboration, in which he can fully devote his skills to products and causes he cares about.

Adam has published papers on machine learning, robot design, learning from demonstration, and novel robot control interfaces. His academic research looks at how humans can bootstrap autonomous systems with sparse datasets obtained through real-world interactions. Since entering industry, his work has focused on making products and algorithms that are easy to use and self-improving.
He has held positions as Lead Hardware Engineer for AltSchool, as a Roboticist for Romotive, 3D Robotics, and TRACLabs, Inc., and as a Software Engineer for Formlabs. Adam is a hacker at heart (and considers himself a "full-stack" roboticist), building his own robots and expressive objects to test his algorithms. He loves empowering people to be builders, teaching hands-on robotics courses and spending as much time in the shop as he can.

AltSchool

As Lead Hardware Engineer at AltSchool, I designed, prototyped, and maintained hardware devices for use by educators and students in the classroom. This included video cameras, microphones, wearables, and augmented spaces.

I continue to work with AltSchool as an expert educator, teaching robotics classes and helping educators create robotics and programming curricula for K-8 students.

Photos

Video

DragonBot

As robots begin to appear in people's everyday lives, it's essential that we understand natural ways for humans and machines to communicate, share knowledge, and build relationships. For years, researchers have tried to make robots more socially capable by inviting human subjects into a laboratory to interact with a robotic character for a few minutes. But the laboratory doesn't share the complexity of the real world, and long-term interaction there is all but impossible. Enter the modern smartphone, which packs the essential functionality for a robot into a tiny, always-connected package. The DragonBot platform is an Android-based robot built specifically for social learning through real-world interactions.

DragonBot is all about data-driven robotics. If we want robots capable of social interaction, we simply need a lot more examples of how humans interact in the real world. DragonBot's cellphone makes the platform deployable outside of the laboratory, and the onboard batteries are capable of powering the robot for over seven hours. This makes DragonBot perfect for longitudinal interactions - learning over time and making the experience more personalized. DragonBot is a "blended reality" character - one that can transition between physical and virtual representations. If you remove the phone from DragonBot's face, the character appears on the phone's screen in a full 3D model, allowing for interaction on the go.

I designed and built DragonBot from scratch, building on the lessons I learned creating Nimbus. The robot uses the Android phone for all of its onboard computation, communicating with custom-built cloud services for computationally heavy tasks like face detection or speech recognition. The phone performs motor control, 3D animation, image streaming, data capture, and much more. DragonBot uses a delta parallel manipulator with updated DC motors, custom motor controllers (made by Sigurður Örn), and precision-machined linkages. Two extra motors were added - head tilt (letting the robot look at objects on a tabletop or up at a user) and a wagging tail (which improves children's perception of the robot's animacy).
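
As a rough illustration of that phone-to-cloud split (the endpoint, port, and payload framing here are hypothetical, not DragonBot's actual protocol), a minimal face-detection service built on Flask and OpenCV's stock Haar cascade might look like:

```python
# Hypothetical face-detection microservice: the phone posts a JPEG frame,
# the server returns face bounding boxes. Endpoint and payload are illustrative.
import cv2
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

@app.route("/detect_faces", methods=["POST"])
def detect_faces():
    # Decode the JPEG bytes sent by the phone into a grayscale image.
    frame = cv2.imdecode(np.frombuffer(request.data, np.uint8),
                         cv2.IMREAD_GRAYSCALE)
    faces = detector.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)
    # Return [x, y, w, h] boxes; the phone could use these to drive gaze.
    return jsonify(faces=[[int(v) for v in f] for f in faces])

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```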

I'm currently using DragonBot to build models of joint attention in human-robot interactions. Most models of attention are based entirely on visual stimuli, but the other sensory modalities carry a lot of information about the social dynamics of an interaction. My ongoing work attempts to improve social attention through language, symbolic labeling, pointing, and other non-verbal behavior. Through easy-to-use teleoperation interfaces and intelligent shared autonomy, my Master's thesis aims to make it much easier to "bootstrap" a robot's performance through large datasets of longitudinal interactions.

Relevant Technologies

Photos

Video

Romo

I was a Roboticist at Romotive, where we built a small iPhone-based robot to help teach children programming concepts through a lovable embodied character named Romo. While at Romotive I led a software team of seven people, taking ownership of Romo's personality and autonomy, as well as coordinating the robot's software architecture.

One of my primary contributions to Romo was a best-in-class iOS framework for realtime computer vision called RMVision. This framework allowed our robot to expertly track faces, follow lines on the floor, detect changes in brightness, and chase brightly colored objects after natural training by a person. Using a combination of OpenCV and hardware-accelerated OpenGL shaders, the framework squeezed every bit of performance out of both legacy and modern iOS devices.
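
RMVision itself was Objective-C and shader code, but the color-chasing behavior it enabled can be sketched in a few lines of Python/OpenCV (the HSV bounds below are placeholders for what the user's training step would supply):

```python
import cv2
import numpy as np

# Placeholder HSV bounds (roughly blue); in RMVision these came from the
# user showing the robot a brightly colored object ("natural training").
LOWER = np.array([100, 120, 80])
UPPER = np.array([130, 255, 255])

cap = cv2.VideoCapture(0)
for _ in range(300):                     # a few seconds of tracking
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Centroid of the largest blob; the robot would steer toward it.
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] > 0:
            print(f"target at ({m['m10']/m['m00']:.0f}, "
                  f"{m['m01']/m['m00']:.0f})")
cap.release()
```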

3D Robotics

As a Roboticist at 3D Robotics, I prototyped hardware and software for the leading US drone company. I was in charge of the main components of the video streaming and update system for Solo -- a "smart drone" capable of creating cinematic aerial video. I developed realtime computer-vision prototypes and production-ready code using a combination of GStreamer (embedded), OpenCV, and GPUImage (iOS). I also traveled to China to help with the production of Solo, where I created hardware jigs and software tests for the assembly line.
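
Solo's production pipeline was more involved, but as a hedged sketch of the receive side (the port, caps, and element choices here are illustrative, not Solo's actual configuration), a GStreamer pipeline for low-latency H.264-over-RTP playback in Python looks like:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Illustrative low-latency receive pipeline: RTP/H.264 over UDP in,
# decoded video out to the default sink.
pipeline = Gst.parse_launch(
    "udpsrc port=5600 caps=\"application/x-rtp, media=video, "
    "encoding-name=H264, payload=96\" "
    "! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false")

pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()
finally:
    pipeline.set_state(Gst.State.NULL)
```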

Relevant Technologies

Photos

Videos

Developing the "dronie" with Phu Nguyen, Kellyn Loehr, and Eric Liao.

Nimbus

Collaborators: Marc Strauss, Hasbro

Nimbus is an exploration into using delta parallel manipulators for highly expressive tabletop robot characters. At the core of the platform lies a four degree-of-freedom delta manipulator, able to move the robot's head in all three translational directions and around a single rotational axis. Parallel manipulators, typically used in manufacturing pick-and-place robots, are also particularly well suited for creating expressive "squash-and-stretch" characters. Because animating the motion of the robot is as simple as controlling a single inverse kinematics handle, even people without animation expertise can easily program believable motions for Nimbus.
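
To make the single-handle claim concrete: positioning the head reduces to the standard rotary-delta inverse-kinematics solve, sketched below in Python. The link lengths are made-up placeholders, this covers only the three translational degrees of freedom, and Nimbus's fourth (rotational) axis is driven by a separate motor.

```python
from math import sqrt, atan2, cos, sin, radians

# Made-up geometry (meters): base triangle side F, effector triangle side E,
# upper arm length RF, lower arm length RE. Real values come from the CAD.
F, E, RF, RE = 0.12, 0.04, 0.06, 0.14

def _arm_angle(x0, y0, z0):
    """Shoulder angle for one arm, with the effector at (x0, y0, z0)."""
    y1 = -0.5 * F / sqrt(3.0)          # shoulder pivot y-offset
    y0 -= 0.5 * E / sqrt(3.0)          # shift to the effector's corner
    # Intersect the circle swept by the upper arm with the sphere swept by
    # the lower arm: the elbow lies on the line z = a + b*y in this plane.
    a = (x0*x0 + y0*y0 + z0*z0 + RF*RF - RE*RE - y1*y1) / (2.0 * z0)
    b = (y1 - y0) / z0
    d = -(a + b*y1)**2 + RF*(b*b*RF + RF)    # discriminant
    if d < 0:
        raise ValueError("target out of reach")
    yj = (y1 - a*b - sqrt(d)) / (b*b + 1.0)  # choose the outer elbow
    return atan2(-(a + b*yj), y1 - yj)

def delta_ik(x, y, z):
    """Three shoulder angles placing the head at (x, y, z); z is negative
    (below the base). The arms sit at 0, 120, and 240 degrees."""
    angles = []
    for k in range(3):
        t = radians(120.0 * k)
        angles.append(_arm_angle(x*cos(t) + y*sin(t),
                                 y*cos(t) - x*sin(t), z))
    return angles

print(delta_ik(0.0, 0.0, -0.10))
```

With a solve like this, an animator only ever keyframes the single handle pose; the three shoulder angles fall out automatically each frame.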

Nimbus also represents an exploration into robot "furs" that move organically with the kinematic constraints of the platform. Working with engineers on the Soft Goods team at Hasbro, we created a sewing pattern for this platform that preserves the volume of the character while deforming like a balloon being squashed and stretched. Using passive elements like long-pile fur and silicone-cast hands and feet, Nimbus aims to increase believability with a very minimal number of manipulators.

The furry exterior also has fabric capacitive electrodes sewn in, allowing for detection of touch and pre-touch at six distinct locations on the robot's body. The robot wirelessly receives information about any people nearby from a Microsoft Kinect hidden in the environment, and Nimbus was programmed to mimic whoever stands in front of it. The video below shows the robot's motion, illustrating the robot moving along with humans and expressing elation when a human's motion coincides with its own.

Photos

Videos

Playtime Computing

The Playtime Computing System is a platform for blended-reality play: an interactive, collaborative media experience that treats the screen and the real world as one continuous space. On-screen audio-visual media (virtual environments, characters, and a story world) extend into the physical environment using digital projectors, robotics, realtime behavior capture, and tangible interfaces. Player behavior is tracked using 3D motion capture as well as other sensors such as cameras and audio inputs.

Characters in this system can transition seamlessly from the physical world to the virtual on-screen world through a physical enclosure that metaphorically acts as a portal between the virtual and the real. Any events or changes that happen to the physical character in the real world carry over to the virtual world, and digital assets can move from the virtual world into the physical one. These blended-reality characters can either be programmed to behave autonomously, or their behavior can be controlled by the players.

My primary contribution was building the "trans-reality portal", the enclosure that transports the robot between physical and virtual representations. I also wrote the image-stitching code that makes the eight projectors output a continuous environment, using a Gaussian pattern from each projector and a single camera image of the scene to back-calculate the projector positions. This is where I had my first exposure to powerful realtime animation techniques through Touch Designer, under the guidance of David Robert. I learned a ton about setting up large audiovisual installations, exploiting graphics supercomputers, and building robot houses.
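
A toy version of that back-calculation, assuming each projector displays its Gaussian pattern in turn while a single camera watches: the intensity-weighted centroid of the blob locates that projector's contribution in camera space.

```python
import numpy as np

def blob_centroid(image):
    """Intensity-weighted centroid of a single bright Gaussian blob.

    `image` is a 2D float array from the calibration camera while one
    projector shows its Gaussian pattern and the others show black.
    """
    img = image - image.min()
    total = img.sum() + 1e-12
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

# Toy check: synthesize a Gaussian at (300, 120) and recover it.
ys, xs = np.indices((480, 640))
frame = np.exp(-((xs - 300)**2 + (ys - 120)**2) / (2 * 15.0**2))
print(blob_centroid(frame))   # ~ (300.0, 120.0)
```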

Relevant Technologies

Photos

Video

Formlabs

I worked with Formlabs when it was still a startup of about 10 people. I set up the company's first websites and handled the design and development of PreForm, the software Formlabs uses to prepare and stage 3D models before sending them to the printer.

MDS

The MDS platform - short for mobile, dexterous, and social - is a humanoid robot designed to interact naturally with people. I spent a few weeks working on Maddox, the newest MDS robot in the fleet. I wrote low-level Linux drivers and calibration code for quick initialization of the robot's motor positions. Through working on MDS, I became familiar with the challenges of animating a highly sophisticated humanoid, solving issues in both high-level motion synthesis and low-level motor control.
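
As an illustration of what quick motor initialization can look like (the names and limit-switch scheme are my own simplification, not the MDS codebase): creep each joint toward a known stop or switch, then declare that position zero.

```python
def home_joint(read_limit_switch, step_toward_stop, set_zero, max_steps=10000):
    """Hypothetical homing routine: nudge a joint in small steps toward its
    limit switch, stop on contact, and record that position as zero."""
    for _ in range(max_steps):
        if read_limit_switch():
            set_zero()
            return True
        step_toward_stop()
    return False   # switch never found; flag a fault instead of grinding

# Toy usage with stand-in hardware stubs.
state = {"pos": 0.3}
ok = home_joint(
    read_limit_switch=lambda: state["pos"] <= 0.0,
    step_toward_stop=lambda: state.update(pos=state["pos"] - 0.01),
    set_zero=lambda: print("joint zeroed"))
```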

Photo

Videos

MARIONET

MARIONET, or Motion Acquisition for Robots through Iterative Online Evaluative Training, is a framework I developed with my undergraduate and master's adviser, Dr. Peter Stone.

Although machine learning has improved the rate and accuracy at which robots are able to learn, there still exist tasks for which humans can improve performance significantly faster and more robustly than computers. While some ongoing work considers the role of human reinforcement in intelligent algorithms, the burden of learning is often placed solely on the computer. These approaches neglect the expressive capabilities of humans, especially our ability to quickly refine motor skills. In this work, we proposed a general framework for Motion Acquisition for Robots through Iterative Online Evaluative Training (MARIONET). The paradigm centers on a human in a motion-capture laboratory who "puppets" a robot in realtime. This mechanism allows for rapid motion development on different robots, with a training process that provides a natural human interface and requires no technical knowledge. Fully implemented and tested on two robotic platforms (one quadruped and one biped), our research demonstrated that MARIONET is a viable way to directly transfer human motion skills to robots.
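
As a loose sketch of the paradigm (the function names and selection scheme are hypothetical simplifications, not the framework's actual algorithm): human motion is retargeted to the robot live, and the human's evaluations decide which recorded motions survive each iteration.

```python
import random

def puppet_frame(mocap_angles, joint_limits):
    """Hypothetical per-frame retargeting: clamp captured human joint
    angles into the robot's joint limits. MARIONET's real mapping was
    platform-specific (quadruped and biped)."""
    return [max(lo, min(hi, a)) for a, (lo, hi) in
            zip(mocap_angles, joint_limits)]

def evaluative_training(demonstrate, evaluate, rounds=5):
    """Iterative online evaluative training, reduced to a selection loop:
    record a puppeted motion, ask the human to score it, keep the best."""
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        motion = demonstrate()          # human puppets the robot live
        score = evaluate(motion)        # human rates the playback
        if score > best_score:
            best, best_score = motion, score
    return best

# Toy stand-in: random "motions" scored by how close their mean is to 0.5.
best = evaluative_training(
    demonstrate=lambda: [random.random() for _ in range(10)],
    evaluate=lambda m: -abs(sum(m) / len(m) - 0.5))
```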

Photos

Video

SnakeBot

This highly articulated snake-like robot uses non-traditional actuators: by jamming and unjamming granular media with a vacuum, each segment of the manipulator can individually transition between solid-like and fluid-like states. Traditional off-board motors and tension cables drive the manipulator into complex configurations, and I helped design the software that enabled the motion of the platform.

Photos

Video

electrello

I've wanted an electric cello since I was old enough to realize they existed. My parents, both professional classical musicians, started me on the cello when I was three years old. But I gravitated towards an electric guitar as a rebellious teenager, and since then I've anxiously waited to combine the soothing tones of the cello with the warm hum of a vintage tube amp. When I took Neil Gershenfeld's whirlwind class - How To Make (almost) Anything - I knew I had to design and build a cello to call my own.

Most electric cellos are either too expensive or lack the "feel" of a traditional instrument. electrello is a low-cost instrument that retains the feel of a traditional cello while letting the performer move more freely, thanks to the four-bar linkages the player grips with their legs. The bow is outfitted with a wireless accelerometer and vibration motor, packed into a compact 3D-printed enclosure that fits any cello bow. The accelerometer records the movements of the bow and can store the data for analysis or use it in realtime. For example, an audio effect - like distortion - could be applied to the sound based on bow speed, intensifying the faster passages of a piece. The vibration motor is primarily an idea for remote lessons, where a teacher could provide haptic feedback to a student in an unobtrusive way.
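
As a hedged sketch of that bow-speed effect (the smoothing window and gain mapping are invented): estimate bow activity from accelerometer magnitude and let it drive the gain of a soft-clipping distortion.

```python
import numpy as np

def bow_speed(accel, window=512):
    """Crude bow-activity estimate: smoothed acceleration magnitude.
    `accel` is an (N, 3) array from the bow's wireless accelerometer."""
    mag = np.linalg.norm(accel, axis=1)
    kernel = np.ones(window) / window
    return np.convolve(mag, kernel, mode="same")

def distort(audio, drive):
    """Soft-clipping distortion; larger `drive` intensifies the effect."""
    return np.tanh(drive * audio) / np.tanh(drive)

# Toy example: a steady tone gets hotter distortion during "fast" bowing.
t = np.linspace(0, 1, 44100, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 220 * t)     # A3-ish tone
accel = np.random.randn(44100, 3)             # stand-in bow data
speed = bow_speed(accel)
drive = 1.0 + 4.0 * speed / (speed.max() + 1e-9)
processed = distort(audio, drive)
```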

The body of the instrument contains an Android phone, which can wirelessly communicate with the bow and display relevant information based on the instrument's sound. The integrated microphone can also stream the sound over the internet for a web-based performance or a remote teaching session. A $1.50 piezo and a simple instrumentation amplifier capture the vibrations from the bridge and convert them into an audio signal, and I have plans to add a magnetic coil pickup for a grungier, more distorted tone. Originally, I wanted the phone to act as an effects box and transcription device, but at the time cellphones couldn't handle simultaneous analog-to-digital and digital-to-analog conversion.

TRACBot

Collaborators: Aaron Hill, Dr. Patrick Beeson, Dr. David Kortenkamp

TRACBot is a differential-drive robot I built from the ground up while interning at TRACLabs, Inc. I designed the robot to work with the Player/Stage/Gazebo software stack, the predecessor to the now-popular ROS framework. I integrated a wide variety of sensors such as LIDAR, thermal sensors, infrared rangers, cameras, and microphones. I also helped design the software architecture to exploit this rich sensory data. After my internship ended, I was hired as a part-time programmer to fabricate simulated 3D models and environments for the robot. Working on TRACBot exposed me to problems in robotics I might never encounter in academia, and it was an incredible learning experience.

Relevant Technologies

Photos

The Cnidarian

When Naomi Darian is poisoned by jellyfish venom, she transforms into The Cnidarian - a jellyfish super-villain created for the TEI 2011 Design Challenge. A custom dress outfitted with electroluminescent tentacles and a pulsing hood shrouds the mysterious Cnidarian. She attacks in a flash, the palms of her gloves housing ultra-bright bulbs from a pair of hacked disposable cameras. I did the electronics for the EL wire and the motion control for the hood, and I composed the music for the video (superhero theme song: bucket-list item checked).

Relevant Technologies

Costume design, motion control, EL wire, Ableton Live

Photos

More information

Telescrapbook

Telescrapbook is a set of wirelessly connected remote scrapbooks that are both educational and customizable. Telescrapbook presents I/O Stickers - adhesive sensors and actuators that children can use to create personalized remote-communication interfaces. By attaching I/O Stickers to special greeting cards, children can invent ways to communicate with long-distance loved ones through personalized, connected messages. Children decorate these cards with their choice of craft materials, creatively expressing themselves while making a functioning interface. The low-bandwidth connections leave room for children to design not only the look and function, but also the signification of the connections.

Telescrapbook is the wonderful work of Jie Qi and Natalie Freed, who let me help out with some coding and soft-sensor making.

Relevant Technologies

C, Arduino

Photos

Video

SEEDpower

SEEDpower is an integrated solution for power management and regulation on small-to-medium-sized robots. With full isolation of logic and motor power sources, the board supports 3-channel input (up to 3 batteries) and 4-channel output (motor voltage, +12V, +5V, and +3.3V). Any two of the input batteries may be placed in series or parallel (using on-board jumpers), and the output is fully protected with both fuses and flyback diodes. The board supports "plug-and-play" charging, using an onboard relay to switch to an external supply whenever the robot is plugged in.

I built a few custom charging stations to charge the batteries inside DragonBot and Huggable while simultaneously powering the robots from an external supply. Each station charges up to three lithium-polymer batteries and provides external power via two 110W power supplies. The lockable charging stations are kid-friendly, having only a single power umbilical with an industrial-grade polarized connector. The front of each charging station has an LED matrix indicating battery levels and current draw from the external supplies.

artbots

Robots don't always have to perform a function. Building robots and physical objects that evoke strong emotions has always been a passion of mine. Here are some of the less-than-functional robots I've made for the purpose of artistic expression.

Photos

Verna

Cuboogie

food_clock

wiggler

Videos

GaitMate

Collaborators: Chris Gutierrez, Dr. Mark Williams

In the summer of 2007, I was chosen for an NSF Research Experience for Undergraduates at the University of Virginia focusing on computing in medicine. In a joint venture between the Department of Computer Science and the School of Medicine, I spearheaded a project titled "Portable, Inexpensive, and Unobtrusive Accelerometer-based Geriatric Gait Analysis." Collaborating closely with a gerontologist, Dr. Mark Williams, we attached wireless accelerometers to the ankles, wrists, and waists of geriatric patients and recorded their walking movements. Using signal processing and supervised machine learning, we were able to detect conditions such as Alzheimer's, spastic hemiparesis, and spastic paraparesis with surprising accuracy. We also developed GaitMate, a tool that helps physicians apply these machine-learning results to diagnosis in clinical gait analysis. Dr. Williams has continued to build on my work and plans to release a commercial version in the near future. Applications of this research include prediction and confirmation of geriatric disorders, telemedicine, and long-term analysis.
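
As a hedged sketch of that pipeline (the features, labels, and classifier below are illustrative stand-ins, not the study's actual method), windowed accelerometer features feeding a supervised classifier might look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gait_features(window):
    """Simple time-domain features from one (N, 3) accelerometer window."""
    mag = np.linalg.norm(window, axis=1)
    return [mag.mean(), mag.std(), mag.max() - mag.min(),
            np.abs(np.diff(mag)).mean()]

# Stand-in data: 100 windows of 3-axis samples with binary labels
# (e.g., typical vs. atypical gait). The real data came from patients.
rng = np.random.default_rng(0)
X = np.array([gait_features(rng.normal(size=(128, 3))) for _ in range(100)])
y = rng.integers(0, 2, size=100)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on noise
```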

Relevant Technologies

MATLAB, IMUs, Supervised Machine Learning

Photos

More information

RoboCup

Collaborators: Dr. Peter Stone

For two years I worked on the UT Austin Villa robot soccer team. During this time, I created motion primitives through human training (using imitation learning to teach the robots to walk and kick). I also worked on models of teamwork for passing and helped build a set of development tools for the Sony AIBO and Aldebaran Nao.