Robouniverse 2015 Recap

I went to yet another mini-conference in NY last week. I wanted to quickly write down some thoughts while I still remember (also because I'm currently watching a pirated copy of Ex Machina). Robouniverse (at least the free portions) was a lot smaller than I thought it would be, but that makes sense given that it's in its first year. Like the Inside 3D-Printing conference, it's run by Meckler Media and seems to be geared more towards investors and business folk looking to learn about the industry, maybe half-networking and half-education. I went on day 2 of 3, since the 3 major keynotes included an ABB research head, Colin Angle of iRobot, and Rodney Brooks of Rethink Robotics. I've listened to talks by the last 2 before, but those were for academic audiences. Academia (imo) often suffers from tunnel vision, and researchers can focus a bit too much on what's publishable instead of what's impactful, so hearing talks that took a broader, more practical view of progress in the industry was a little refreshing.

ABB Research

I'll be honest, I don't remember much from this particular talk. It was mostly numbers regarding the growth of the commercial (as opposed to industrial and military) robotics sector. The speaker claimed that the commercial robotics space is set to explode, but without really giving any concrete reasons why. I know that hardware tends to be very hush-hush through its longer development phases, but I'm unaware of any recent major technological developments (on the level of the Kinect, say), which suggests to me that any major growth would come from repurposing tried-and-true existing tech, namely automated platforms capable of navigation, in new applications.

Aside from the oft-cited Roomba and Kiva cases, the speaker briefly described Vecna's advances in healthcare robotics through their automated food/medicine delivery platforms. All of these examples seem to be implementations of 'smart' conveyor belts in less-structured environments where it's too costly to change the infrastructure. Any pick-and-place, in terms of storing items in and retrieving them from the robot, still depends on the human operator. More importantly, at least in the case of Kiva and Vecna, there now needs to be a trained operator to manage the system and debug it in case of failures. The primary challenge of integrating robotics into daily living lies (imo) in the training and preparation of the human factor, not in improving or adapting the hardware for more use cases.

Colin Angle, iRobot

Full Disclosure 1/3: The first major project I assisted in after I started grad school was a collaboration with iRobot, and I didn't particularly enjoy it, so that may affect my bias here.

iRobot is probably still the big player in commercial robotics, on the merits of the Roomba alone. Beyond that, they have some cool stuff for the military complex, like the PackBot and other rugged surveillance bots. They provide some of the few (if not the only) instances of rugged, untethered robots in use outside of research conditions, which is certainly exciting, but it also seems that, as a result, they're less likely to take risks and innovate. The one in-development product I hadn't heard of or seen before was a low-cost (sub-$2k), anthropomorphic arm, which to me seems like a typical, uninspired move to capture a subset of the market that Rethink Robotics is already pursuing.

The one interesting tidbit Angle keyed in on was iRobot's current focus on mapping and localization. When you think about it, it's one piece of tech that's advancing by leaps and bounds, not only in resolution but also in the number of people with access to it. In most cases, I'd argue (and maybe Angle hinted at this as well) that applications only consider instantaneous states (i.e., where are you now?), whereas knowing the history of an individual's motion provides another layer of rich information. Angle's primary example was a person who returns home, goes to the refrigerator, and then sits down at the couch. Just knowing that the person is by the couch doesn't convey the same degree of knowledge.
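To make that concrete, here's a toy sketch of the difference between querying an instantaneous location and querying a short motion history. Everything here is made up for illustration: the semantic room labels, the pose source, and the time window.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    t: float      # timestamp in seconds (hypothetical)
    room: str     # semantic label from a map (hypothetical)

def last_location(history):
    """Instantaneous state: where is the person right now?"""
    return history[-1].room

def recent_path(history, window=60.0):
    """Motion history: rooms visited in the last `window` seconds, in order,
    with consecutive duplicates collapsed."""
    cutoff = history[-1].t - window
    path = []
    for p in history:
        if p.t >= cutoff and (not path or path[-1] != p.room):
            path.append(p.room)
    return path

# Angle's example: came home, went to the fridge, sat on the couch.
history = [Pose(0.0, "door"), Pose(20.0, "fridge"), Pose(50.0, "couch")]
print(last_location(history))  # couch
print(recent_path(history))    # ['door', 'fridge', 'couch']
```

The second query is where the richer inference lives: "door → fridge → couch" suggests someone just got home with a snack, which "couch" alone doesn't.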

I think there are major challenges in figuring out how to semantically structure that data or pick out the critical timestamps or waypoints, problems that dirty Internet startup money has yet to even begin to solve, but location/mapping undoubtedly meets two criteria:

1. Localization/mapping (and, to a high degree, perception) has always been a focus of robotics, and it's something that roboticists know and do quite well.

2. There's an immense amount of location data to be had, easily and cheaply, thanks to the prevalence of smartphones and increasingly inexpensive sensors.

As was the case with the Kinect, it seems to me that major developments in robotics have been predicated on a sudden increase in the availability of new, affordable technology that can generate extensive datasets that used to be difficult to acquire. That said, I'll argue that even though the Kinect has allowed more researchers to do what only a few labs could do before (in terms of perception), it hasn't necessarily led to more innovation in that space, since there have always been other ways to detect and map environmental and object structure. However, considering localization history as Colin suggested relates more to inferring human intent, which I think is a different problem altogether.

Full Disclosure 2/3: In my first year of grad school, I had a major side project working on an online app that effectively created a temporal and geographic map of photos, so I may have a predisposition towards the benefits of localization.

Rodney Brooks, Rethink Robotics

Full Disclosure 3/3: Rodney Brooks was my freshman year advisor, I've never had a high opinion of the Baxter robot, and the new Sawyer iteration confuses me even more.

So, I never got the point of the Baxter robot. It has two bulky arms with limited range and limited payload, and they can't be used for bimanual manipulation (granted, that's a super hard problem even in academia). To me, the money would be better spent on a Universal Robots arm, especially if you're just going to end up doing some form of teach-and-play, which is what many of the Baxter demo videos show. In research, I've really only ever seen the Baxter used for dirt-simple pick-and-place tasks, though I suppose the PR2 was used in much the same way. In manufacturing settings, it seemed like Baxter was often doing what a conveyor belt or chute would do. What a waste!

However, after listening to this version of Brooks' talk, and thinking more about his background in social/interactive robotics, it makes a lot more sense. First off, the vast majority of manufacturing apparently has very little automation. An extensive set of tooling and machinery doesn't necessarily equate to automation; repetitive fixturing and pick-and-place tasks are still done largely by people. A plug-and-play machine that can fulfill a variety of these tasks can actually be an attractive investment. The one thing I'd never really considered (or realized) is that while the Baxter mostly does teach-and-play tasks, it's not a purely kinematic process: it uses its onboard sensors to adjust for variations between the human-taught maneuver and its final trajectory. For example, when tasked to brace an element against a stop in a machine, it tries (from Rethink's sample videos) to exert the net force on the object rather than just follow a strict motion. It's a subtle difference, and not even that much of a technical feat, but I think a necessary feature for these particular use cases.
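As an illustration of that bracing behavior, here's a toy 1-D sketch of my own (emphatically not Rethink's controller): instead of replaying a fixed taught endpoint, the arm keeps advancing until the sensed contact force hits a target. The spring contact model and all the numbers are made up.

```python
def brace_against_stop(stop_pos, f_target, k=100.0, step=0.001):
    """Advance toward a stop until the sensed force reaches f_target,
    rather than moving to a fixed taught position. Contact is modeled
    as a toy spring: force = k * penetration (all units made up)."""
    x = 0.0
    while True:
        force = k * max(0.0, x - stop_pos)  # simulated force sensor reading
        if force >= f_target:
            return x, force
        x += step

# The loop terminates based on force, so it compensates automatically if
# the stop isn't exactly where the taught demonstration put it.
x, f = brace_against_stop(stop_pos=0.05, f_target=2.0)
```

The point is that the termination condition is a force, not a position, which is exactly what absorbs the variation between the taught maneuver and the real workpiece.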

I still think Rethink's approach, much like iRobot's, is wrong. Brooks was quick to point out that the example videos he showed were done by customers with minimal guidance from Rethink, who are more focused on creating a flexible hardware/software platform. At the low price point Baxter is targeting, I don't think it's smart (or even viable) to pursue a general-purpose solution. A lot of the example pick-and-place tasks shown in his presentation (even the ones that required perception) could be done by a simple 3-dof cartesian gantry built for a fraction of the Baxter's cost. While it's true that the Baxter can bypass the cost of an integrator or expert operator (and I actually think Rethink's done a great job towards that end, which is a testament to Brooks' background and expertise in that area), I think customers in the manufacturing space are looking for a custom solution, which is not the same as a singular solution that can perform a set of custom tasks. I can't imagine many scenarios where a Baxter needs to be moved and repurposed for secondary tasks in a small manufacturing setting. In that case, if the primary benefit is avoiding the initial setup cost, is it really worth investing in a subpar solution? Maybe the real opportunity is a library of easily-customizable workcell components that can be quickly retrofitted by the consumer?
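Part of why I think the gantry wins on cost is how little software these tasks actually need. A back-of-the-envelope sketch (hypothetical Gantry stub, made-up coordinates) shows the entire control flow is a handful of moves:

```python
class Gantry:
    """Stub for a hypothetical 3-dof (x, y, z) cartesian gantry with a gripper."""
    def __init__(self):
        self.pos = (0.0, 0.0, 0.0)
        self.holding = False
        self.log = []

    def move(self, x, y, z):
        self.pos = (x, y, z)
        self.log.append(("move", x, y, z))

    def grip(self, closed):
        self.holding = closed
        self.log.append(("grip", closed))

def pick_and_place(g, pick, place, z_safe=0.2, z_work=0.0):
    """One full cycle: hover over the pick point, descend, grip,
    lift, traverse, descend, release, lift."""
    g.move(pick[0], pick[1], z_safe)
    g.move(pick[0], pick[1], z_work)
    g.grip(True)
    g.move(pick[0], pick[1], z_safe)
    g.move(place[0], place[1], z_safe)
    g.move(place[0], place[1], z_work)
    g.grip(False)
    g.move(place[0], place[1], z_safe)

g = Gantry()
pick_and_place(g, pick=(0.1, 0.2), place=(0.5, 0.2))
```

Swap in real motor drivers and a perception step for `pick`, and that's the whole cell; nothing here requires two 7-dof arms.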

tl;dr: Ex Machina was a pretty insightful film, but the ending was disappointing