@InProceedings{Brandao2018robophil,
author = {Martim Brandao},
editor = {Mark Coeckelbergh and Janina Loh and Michael Funk and Johanna Seibt and Marco N{\o}rskov},
title = {Moral Autonomy and Equality of Opportunity for Algorithms in Autonomous Vehicles},
booktitle = {Envisioning Robots in Society -- Power, Politics, and Public Space},
series = {Frontiers in Artificial Intelligence and Applications},
volume = {311},
year = {2018},
publisher = {IOS Press},
pages = {302--310},
abstract = {This paper addresses two issues with the development of ethical algorithms for autonomous vehicles. One is that of uncertainty in the choice of ethical theories and utility functions. Using notions of moral diversity, normative uncertainty, and autonomy, we argue that each vehicle user should be allowed to choose the ethical views by which the vehicle should act. We then deal with the issue of indirect discrimination in ethical algorithms. Here we argue that equality of opportunity is a helpful concept, which could be applied as an algorithm constraint to avoid discrimination on protected characteristics.},
doi = {10.3233/978-1-61499-931-7-302},
isbn = {978-1-61499-931-7},
topic = {Robot ethics},
url = {http://www.martimbrandao.com/papers/Brandao2018-robophil.pdf}
}

@InProceedings{Tan2018,
author = {Wei Xin Tan and Martim Brandao and Kenji Hashimoto and Atsuo Takanishi},
editor = {Manuel Giuliani and Tareq Assaf and Maria Elena Giannaccini},
title = {Trajectory Optimization for High-Power Robots with Motor Temperature Constraints},
booktitle = {Towards Autonomous Robotic Systems},
year = {2018},
publisher = {Springer International Publishing},
address = {Cham},
pages = {3--14},
abstract = {Modeling heat transfer is an important problem in high-power electrical robots as the increase of motor temperature leads to both lower energy efficiency and the risk of motor damage. Power consumption itself is a strong restriction in these robots especially for battery-powered robots such as those used in disaster-response. In this paper, we propose to reduce power consumption and temperature for robots with high-power DC actuators without cooling systems only through motion planning. We first propose a parametric thermal model for brushless DC motors which accounts for the relationship between internal and external temperature and motor thermal resistances. Then, we introduce temperature variables and a thermal model constraint on a trajectory optimization problem which allows for power consumption minimization or the enforcing of temperature bounds during motion planning. We show that the approach leads to qualitatively different motion compared to typical cost function choices, as well as energy consumption gains of up to 40%.},
doi = {10.1007/978-3-319-96728-8_1},
isbn = {978-3-319-96728-8},
keywords = {Humanoid robots, Optimization},
topic = {Optimization},
url = {http://www.martimbrandao.com/papers/Tan2018-taros.pdf}
}
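
The thermal-constraint idea in Tan2018 can be sketched as follows: a two-node (winding/case) first-order motor thermal model integrated forward in time, plus the kind of temperature bound a motion planner would check along a candidate trajectory. This is a minimal illustration with made-up parameter values, not the paper's identified model.

```python
def simulate_motor_temps(currents, dt=0.1, T_amb=25.0, R=1.2,
                         R_wc=2.0, R_ca=6.0, C_w=40.0, C_c=300.0):
    """Forward-Euler, two-node (winding/case) DC-motor thermal model.

    All parameter values are illustrative, not identified from a real motor.
    """
    T_w, T_c = T_amb, T_amb
    history = []
    for I in currents:
        P = I * I * R                 # Joule heating in the winding [W]
        q_wc = (T_w - T_c) / R_wc     # heat flow winding -> case [W]
        q_ca = (T_c - T_amb) / R_ca   # heat flow case -> ambient [W]
        T_w += dt * (P - q_wc) / C_w
        T_c += dt * (q_wc - q_ca) / C_c
        history.append((T_w, T_c))
    return history

def violates_bound(history, T_max):
    """The kind of constraint a planner would enforce: T_w <= T_max."""
    return any(T_w > T_max for T_w, _ in history)

hot = simulate_motor_temps([8.0] * 600)   # 60 s at a sustained 8 A
cool = simulate_motor_temps([0.0] * 600)  # 60 s idle: stays at ambient
```

In the paper's setting the temperatures become extra decision variables of the trajectory optimization; here they are only simulated forward for a fixed current profile.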

@InProceedings{Brandao2017maxperturbmin,
author = {Martim Brandao and Kenji Hashimoto and Atsuo Takanishi},
title = {Maximize-perturb-minimize: a fast and effective heuristic to obtain sets of locally optimal robot postures},
booktitle = {IEEE International Conference on Robotics and Biomimetics (ROBIO)},
year = {2017},
month = {Dec},
abstract = {Complex robots such as legged and humanoid
robots are often characterized by non-convex optimization
landscapes with multiple local minima. Obtaining sets of these
local minima has interesting applications in global optimization,
as well as in smart teleoperation interfaces with automatic
posture suggestions.
In this paper we propose a new heuristic method to obtain
sets of local minima, which is to run multiple minimization
problems initialized around a local maximum. The method
is simple, fast, and produces diverse postures from a single
nominal posture. Results on the robot WAREC using a
sum-of-squared-torques cost function show that our method quickly
obtains lower-cost postures than typical random restart strategies.
We further show that obtained postures are more diverse
than when sampling around nominal postures, and that they
are more likely to be feasible when compared to a uniform sampling
strategy. We also show that lack of completeness leads
to the method being most useful when computation has to be
fast, but not on very large computation time budgets.},
doi = {10.1109/ROBIO.2017.8324815},
keywords = {Humanoid robots, Optimization},
topic = {Optimization},
url = {http://www.martimbrandao.com/papers/Brandao2017-robio.pdf},
}
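
A toy version of the maximize-perturb-minimize heuristic from Brandao2017maxperturbmin, on a hypothetical 1-D non-convex "posture cost" with two local minima; the cost function, step sizes, and perturbation radius are all invented for illustration.

```python
import random

# Hypothetical 1-D posture cost with two local minima (near x = -1.26 and
# x = 1.18) separated by a local maximum near x = 0.08.
def cost(x):
    return x**4 - 3*x**2 + 0.5*x

def grad(x):
    return 4*x**3 - 6*x + 0.5

def descend(x, lr=0.01, iters=3000):
    for _ in range(iters):
        x -= lr * grad(x)
    return x

def ascend(x, lr=0.01, iters=1000, lo=-2.0, hi=2.0):
    # Gradient ascent clamped to a feasibility box (stand-in for joint limits).
    for _ in range(iters):
        x = min(hi, max(lo, x + lr * grad(x)))
    return x

def maximize_perturb_minimize(x_nominal, n=20, radius=0.5, seed=0):
    rng = random.Random(seed)
    x_max = ascend(x_nominal)                        # 1. maximize
    starts = [x_max + rng.uniform(-radius, radius)   # 2. perturb
              for _ in range(n)]
    return sorted({round(descend(s), 2)              # 3. minimize
                   for s in starts})

minima = maximize_perturb_minimize(0.5)  # both minima from one nominal point
```

Starting the minimizations around the local maximum, rather than around the nominal point, is what spreads the restarts across different basins of attraction.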

@InProceedings{Brandao2017legopt,
author = {Martim Brandao and Kenji Hashimoto and Atsuo Takanishi},
title = {SGD for robot motion? The effectiveness of stochastic optimization on a new benchmark for biped locomotion tasks},
booktitle = {17th IEEE-RAS International Conference on Humanoid Robots},
year = {2017},
month = {Nov},
abstract = {Trajectory optimization and posture generation are hard problems in robot locomotion, which can be non-convex and have multiple local optima. Progress on these problems is further hindered by a lack of open benchmarks, since comparisons of different solutions are difficult to make.
In this paper we introduce a new benchmark for trajectory optimization and posture generation of legged robots, using a pre-defined scenario, robot and constraints, as well as evaluation criteria. We evaluate state-of-the-art trajectory optimization algorithms based on sequential quadratic programming (SQP) on the benchmark, as well as new stochastic and incremental optimization methods borrowed from the large-scale machine learning literature. Interestingly we show that some of these stochastic and incremental methods, which are based on stochastic gradient descent (SGD), achieve higher success rates than SQP on tough initializations. Inspired by this observation we also propose a new incremental variant of SQP which updates only a random subset of the costs and constraints at each iteration. The algorithm is the best performing in both success rate and convergence speed, improving over SQP by up to 30% in both criteria.
The benchmark's resources and a solution evaluation script are made openly available.},
doi = {10.1109/HUMANOIDS.2017.8239535},
keywords = {Humanoid robots, Humanoid locomotion, Optimization},
topic = {Humanoid and legged robot locomotion;Optimization},
url = {http://www.martimbrandao.com/papers/Brandao2017-humanoids.pdf},
}
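
The randomized-subset idea behind the incremental SQP variant in Brandao2017legopt can be illustrated with plain stochastic gradient descent on a sum of per-term penalties, updating only a random subset of terms at each iteration. The quadratic terms, targets, and step sizes below are made up; the paper applies the idea to the costs and constraints of a trajectory optimization, not to a scalar toy problem.

```python
import random

# Full cost = sum of many per-term penalties (x - t_i)^2, minimized at the
# mean of the (made-up) targets t_i.
targets = [0.5 * i for i in range(10)]   # t_i = 0.0 .. 4.5, mean = 2.25

def sgd_subset(x, steps=2000, batch=3, lr=0.05, seed=1):
    """Each iteration uses the gradient of a random subset of terms only,
    mimicking randomized cost/constraint updates; batch = len(targets)
    recovers the deterministic full-gradient method."""
    rng = random.Random(seed)
    for _ in range(steps):
        subset = rng.sample(targets, batch)
        g = sum(2.0 * (x - t) for t in subset) / batch
        x -= lr * g
    return x

x_full = sgd_subset(10.0, batch=len(targets))  # converges to the mean, 2.25
x_stoch = sgd_subset(10.0)                     # hovers in a noise ball around it
```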

@InProceedings{Brandao2016planning,
author = {Martim Brandao and Yukitoshi Minami Shiguematsu and Kenji Hashimoto
and Atsuo Takanishi},
title = {Material Recognition CNNs and Hierarchical Planning for Biped Robot
Locomotion on Slippery Terrain},
booktitle = {16th IEEE-RAS International Conference on Humanoid Robots},
year = {2016},
pages = {81--88},
month = {Nov},
note = {[Best Oral Paper Award Finalist]},
abstract = {In this paper we tackle the problem of visually predicting surface
friction for environments with diverse surfaces, and integrating
this knowledge into biped robot locomotion planning. The problem
is essential for autonomous robot locomotion since diverse surfaces
with varying friction abound in the real world, from wood to ceramic
tiles, grass or ice, which may cause difficulties or huge energy
costs for robot locomotion if not considered. We propose to estimate
friction and its uncertainty from visual estimation of material classes
using convolutional neural networks, together with probability distribution
functions of friction associated with each material. We then robustly
integrate the friction predictions into a hierarchical (footstep
and full-body) planning method using chance constraints, and optimize
the same trajectory costs at both levels of the planning method for
consistency. Our solution achieves fully autonomous perception and
locomotion on slippery terrain, which considers not only friction
and its uncertainty, but also collision, stability and trajectory
cost. We show promising friction prediction results in real pictures
of outdoor scenarios, and planning experiments on a real robot facing
surfaces with different friction.},
doi = {10.1109/HUMANOIDS.2016.7803258},
keywords = {Humanoid robots, Humanoid locomotion},
topic = {Humanoid and legged robot locomotion;Friction from vision},
url = {http://www.martimbrandao.com/papers/Brandao2016-humanoids-planning.pdf}
}
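
The perception-to-planning pipeline of Brandao2016planning can be sketched in two steps: turn material-class probabilities (e.g. CNN softmax outputs) into a friction estimate with uncertainty, then gate footsteps with a chance constraint. The per-material priors and the Gaussian-style constraint below are illustrative assumptions, not the paper's measured distributions.

```python
# Hypothetical per-material coefficient-of-friction priors (mean, std).
FRICTION_PRIORS = {
    "wood":    (0.55, 0.05),
    "ceramic": (0.40, 0.08),
    "ice":     (0.10, 0.03),
}

def friction_estimate(class_probs):
    """Mixture mean/std of friction given material-class probabilities,
    via the law of total variance: Var = E[Var] + Var[E]."""
    mean = sum(p * FRICTION_PRIORS[m][0] for m, p in class_probs.items())
    var = sum(p * (FRICTION_PRIORS[m][1] ** 2 +
                   (FRICTION_PRIORS[m][0] - mean) ** 2)
              for m, p in class_probs.items())
    return mean, var ** 0.5

def footstep_admissible(required_cof, class_probs, k=2.0):
    """Gaussian-style chance constraint: the friction the step requires must
    stay k standard deviations below the estimated coefficient of friction."""
    mean, std = friction_estimate(class_probs)
    return required_cof <= mean - k * std

probs = {"wood": 0.7, "ceramic": 0.2, "ice": 0.1}
```

A confident "ice" classification makes even a gentle step inadmissible, while a confident high-friction material admits it; ambiguous classifications inflate the variance and therefore tighten the constraint.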

@InProceedings{Brandao2016friction,
author = {Martim Brandao and Kenji Hashimoto and Atsuo Takanishi},
title = {Friction from Vision: A Study of Algorithmic and Human Performance
with Consequences for Robot Perception and Teleoperation},
booktitle = {16th IEEE-RAS International Conference on Humanoid Robots},
year = {2016},
pages = {428--435},
month = {Nov},
abstract = {Friction estimation from vision is an important problem for robot
locomotion through contact. The problem is challenging due to its
dependence on many factors such as material, surface conditions and contact area. In this paper we 1)
conduct an analysis of image features that correlate with humans’
friction judgements; and 2) compare algorithmic to human performance
at the task of predicting the coefficient of friction between different
surfaces and a robot’s foot. The analysis is based on two new datasets
which we make publicly available. One is annotated with human judgements
of friction, illumination, material and texture; the other is annotated
with static coefficient of friction (COF) of a robot’s foot and human
judgments of friction. We propose and evaluate visual friction prediction
methods based on image features, material class and text mining.
And finally, we draw conclusions regarding the robustness to COF
uncertainty required by control and planning algorithms;
the low performance of humans at the task when compared to simple
predictors based on material label; and the promising use of text
mining to estimate friction from vision.},
doi = {10.1109/HUMANOIDS.2016.7803311},
topic = {Friction from vision},
url = {http://www.martimbrandao.com/papers/Brandao2016-humanoids-friction.pdf}
}
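
One of the paper's baselines — predicting friction from the material label alone — can be sketched as a per-material mean predictor evaluated against a material-blind global mean. The tiny dataset below is invented; the real datasets annotate images with material labels and measured foot-ground COF.

```python
# Toy annotated dataset: (material label, measured foot-ground COF).
samples = [("wood", 0.52), ("wood", 0.58), ("tile", 0.38),
           ("tile", 0.44), ("grass", 0.48), ("grass", 0.42)]

def per_material_means(data):
    totals = {}
    for mat, cof in data:
        s, n = totals.get(mat, (0.0, 0))
        totals[mat] = (s + cof, n + 1)
    return {mat: s / n for mat, (s, n) in totals.items()}

def rmse(predict, data):
    sq = [(predict(mat) - cof) ** 2 for mat, cof in data]
    return (sum(sq) / len(sq)) ** 0.5

means = per_material_means(samples)
global_mean = sum(cof for _, cof in samples) / len(samples)

rmse_material = rmse(lambda m: means[m], samples)   # per-material predictor
rmse_global = rmse(lambda m: global_mean, samples)  # material-blind baseline
```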

@Article{Brandao2016tro,
author = {Martim Brandao and Kenji Hashimoto and Jos{\'e} Santos-Victor and
Atsuo Takanishi},
title = {Footstep Planning for Slippery and Slanted Terrain Using Human-Inspired
Models},
journal = {IEEE Transactions on Robotics},
year = {2016},
volume = {32},
pages = {868--879},
number = {4},
month = {Aug},
abstract = {Energy efficiency and robustness of locomotion to different terrain
conditions are important problems for humanoid robots deployed in
the real world. In this paper, we propose a footstep-planning algorithm
for humanoids that is applicable to flat, slanted, and slippery terrain,
which uses simple principles and representations gathered from human
gait literature. The planner optimizes a center-of-mass (COM) mechanical
work model subject to motion feasibility and ground friction constraints
using a hybrid A* search and optimization approach. Footstep placements
and orientations are discrete states searched with an A* algorithm,
while other relevant parameters are computed through continuous optimization
on state transitions. These parameters are also inspired by human
gait literature and include footstep timing (double-support and swing
time) and parameterized COM motion using knee flexion angle keypoints.
The planner relies on work, the required coefficient of friction
(RCOF), and feasibility models that we estimate in a physics simulation.
We show through simulation experiments that the proposed planner
leads to both low electrical energy consumption and human-like motion
on a variety of scenarios. Using the planner, the robot automatically
opts between avoiding or (slowly) traversing slippery patches depending
on their size and friction, and it chooses energy-optimal stairs
and climbing angles in slopes. The obtained motion is also consistent
with observations found in human gait literature, such as human-like
changes in RCOF, step length and double-support time on slippery
terrain, and human-like curved walking on steep slopes. Finally,
we compare COM work minimization with other choices of the objective
function.},
doi = {10.1109/TRO.2016.2581219},
issn = {1552-3098},
keywords = {Biologically-Inspired robots, Humanoid robots, Motion planning, Footstep
planning, Path planning, Human gait},
topic = {Humanoid and legged robot locomotion;Human-inspired algorithms},
url = {http://www.martimbrandao.com/papers/Brandao2016-tro.pdf}
}
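
The hybrid "discrete search + continuous optimization on transitions" structure of Brandao2016tro can be sketched on a 1-D toy: an outer best-first search over footstep positions whose edge costs come from an inner 1-D optimization of step timing against a made-up work model. Everything here (the work model, step options, golden-section inner solver) is an illustrative stand-in for the paper's COM work model and feasibility constraints.

```python
import heapq

def step_cost(step_len):
    """Inner continuous optimization: choose the step timing T that minimizes
    a made-up per-step work model, via golden-section search on T."""
    def work(T):
        return step_len ** 2 / T + 0.5 * T + 0.1  # speed + support + fixed terms
    lo, hi = 0.1, 2.0
    phi = (5 ** 0.5 - 1) / 2
    for _ in range(60):
        m1, m2 = hi - phi * (hi - lo), lo + phi * (hi - lo)
        if work(m1) < work(m2):
            hi = m2
        else:
            lo = m1
    return work((lo + hi) / 2)

def plan_footsteps(start, goal, step_options=(0.2, 0.4, 0.6)):
    """Outer discrete search over footstep positions on a line (A* with a
    zero heuristic, i.e. Dijkstra); each edge cost comes from step_cost."""
    frontier = [(0.0, start, [start])]
    settled = {}
    while frontier:
        cost, x, path = heapq.heappop(frontier)
        if x >= goal:
            return cost, path
        if settled.get(x, float("inf")) <= cost:
            continue
        settled[x] = cost
        for s in step_options:
            nxt = round(x + s, 3)
            heapq.heappush(frontier, (cost + step_cost(s), nxt, path + [nxt]))
    return None

cost, path = plan_footsteps(0.0, 1.2)  # the fixed per-step term favours long steps
```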

@Article{Brandao2015tpami,
author = {Martim Brandao and Ricardo Ferreira and Kenji Hashimoto and Atsuo
Takanishi and Jos{\'e} Santos-Victor},
title = {On Stereo Confidence Measures for Global Methods: Evaluation, New
Model and Integration into Occupancy Grids},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
year = {2016},
volume = {38},
pages = {116--128},
number = {1},
month = {Jan},
abstract = {Stereo confidence measures are important functions for global reconstruction
methods and some applications of stereo. In this article we evaluate
and compare several models of confidence which are defined at the
whole disparity range. We propose a new stereo confidence measure
which we call the Histogram Sensor Model (HSM), and show that it
is one of the best performing functions overall. We also introduce,
for parametric models, a systematic method for estimating their parameters
which is shown to lead to better performance when compared to parameters
as computed in previous literature. All models were evaluated when
applied to two different cost functions at different window sizes
and model parameters. Contrary to previous stereo confidence measure
benchmark literature, we evaluate the models with criteria important
not only to winner-take-all stereo, but also to global applications.
To this end, we evaluate the models on a real-world application using
a recent formulation of 3D reconstruction through occupancy grids
which integrates stereo confidence at all disparities. We obtain
and discuss our results on publicly available indoor and outdoor
datasets.},
doi = {10.1109/TPAMI.2015.2437381},
issn = {0162-8828},
keywords = {image reconstruction;stereo image processing;3D reconstruction;HSM;global
reconstruction methods;histogram sensor model;stereo confidence measures;Benchmark
testing;Computational modeling;Cost function;Histograms;Measurement
uncertainty;Stereo vision;Time measurement;confidence;occupancy grids;stereo
matching;uncertainty},
topic = {Stereo vision and robot mapping},
url = {http://www.martimbrandao.com/papers/Brandao2015_tpami.pdf}
}
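
The class of models the paper studies — confidence defined over the whole disparity range, not just at the winner-take-all disparity — can be illustrated with a simple softmin conversion of a matching-cost curve into a probability distribution. This is a generic likelihood model in the spirit of whole-range confidence measures, not the paper's HSM; the cost curves and temperature `beta` are made up.

```python
import math

def cost_to_distribution(costs, beta=2.0):
    """Map a stereo matching-cost curve c(d), defined over the whole
    disparity range, to a probability distribution p(d) via a softmin."""
    weights = [math.exp(-beta * c) for c in costs]
    z = sum(weights)
    return [w / z for w in weights]

def confidence(probs):
    """Peak probability: scalar confidence in the winner-take-all disparity."""
    return max(probs)

sharp = cost_to_distribution([5.0, 0.2, 4.0, 6.0, 7.0])  # distinct cost minimum
flat = cost_to_distribution([1.0, 0.9, 1.1, 1.0, 0.95])  # ambiguous curve
```

A sharp cost minimum yields a peaked distribution (high confidence); an ambiguous curve spreads probability across disparities, which is exactly the information a global method such as an occupancy grid can exploit.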

@InProceedings{Brandao2015humanoids,
author = {Martim Brandao and Kenji Hashimoto and Jos{\'e} Santos-Victor and
Atsuo Takanishi},
title = {Optimizing energy consumption and preventing slips at the footstep
planning level},
booktitle = {15th IEEE-RAS International Conference on Humanoid Robots},
year = {2015},
pages = {1--7},
month = {Nov},
abstract = {Energy consumption and stability are two important problems for humanoid
robots deployed in remote outdoor locations. In this paper we propose
an extended footstep planning method to optimize energy consumption
while considering motion feasibility and ground friction constraints.
To do this we estimate models of energy, feasibility and slippage
in physics simulation, and integrate them into a hybrid A* search and optimization-based
planner. The graph search is done in footstep position space, while
timing (leg swing and double support times) and COM motion (parameterized
height trajectory) are obtained by solving an optimization problem
at each node. We conducted experiments to validate the obtained energy
model on the real robot, as well as planning experiments showing
9 to 19% energy savings. In example scenarios, the robot can correctly
plan to optimally traverse slippery patches or avoid them depending
on their size and friction; and uses stairs with the most beneficial
dimensions in terms of energy consumption.},
doi = {10.1109/HUMANOIDS.2015.7363514},
keywords = {Humanoid robots, Humanoid locomotion},
topic = {Humanoid and legged robot locomotion},
url = {http://www.martimbrandao.com/papers/Brandao2015_humanoids.pdf}
}

In this paper we use an extended footstep planning algorithm to plan optimal humanoid locomotion trajectories subject to constraints on the maximum predicted Zero Moment Point (ZMP) tracking error. The approach can guarantee walking stability bounds with little extra computational burden, thus increasing safety of robots walking in challenging environments. This is done by estimating energy and stability models in simulation through Bayesian optimization, and smartly integrating the models into search-based planning.

@Article{Destephe2015,
author = {Matthieu Destephe and Martim Brandao and Tatsuhiro Kishi and Massimiliano
Zecca and Kenji Hashimoto and Atsuo Takanishi},
title = {Walking in the uncanny valley: importance of the attractiveness on
the acceptance of a robot as a working partner},
journal = {Frontiers in Psychology},
year = {2015},
volume = {6},
month = {Feb},
abstract = {The Uncanny valley hypothesis, which tells us that almost-human characteristics
in a robot or a device could cause uneasiness in human observers,
is an important research theme in the Human Robot Interaction (HRI)
field. Yet, that phenomenon is still not well-understood. Many have
investigated the external design of humanoid robot faces and bodies
but only a few studies have focused on the influence of robot movements
on our perception and feelings of the Uncanny valley. Moreover, no
research has investigated the possible relation between our uneasiness
feeling and whether or not we would accept robots having a job in
an office, a hospital or elsewhere. To better understand the Uncanny
valley, we explore several factors which might have an influence
on our perception of robots, be it related to the subjects, such
as culture or attitude toward robots, or related to the robot such
as emotions and emotional intensity displayed in its motion. We asked
69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived
Humanity, Eeriness, and Attractiveness) and state where they would
rather see the robot performing a task. Our results suggest that,
among the factors we chose to test, the attitude toward robots is
the main influence on the perception of the robot related to the
Uncanny valley. Robot occupation acceptability was affected only
by Attractiveness, mitigating any Uncanny valley effect. We discuss
the implications of these findings for the Uncanny valley and the
acceptability of a robotic worker in our society.},
doi = {10.3389/fpsyg.2015.00204},
keywords = {Human-Robot Interaction, Humanoid robots},
publisher = {Frontiers Media {SA}},
topic = {Human-robot interaction}
}

@INPROCEEDINGS{Brandao2014iros,
author = {Martim Brandao and Ricardo Ferreira and Kenji Hashimoto and Jos{\'e}
Santos-Victor and Atsuo Takanishi},
title = {On the formulation, performance and design choices of Cost-Curve
Occupancy Grids for stereo-vision based 3D reconstruction},
booktitle = {2014 IEEE/RSJ International Conference on Intelligent Robots and
Systems},
year = {2014},
pages = {1818--1823},
month = {September},
abstract = {We present a grid-based 3D reconstruction method which integrates
all costs given by stereo vision into what we call a Cost-Curve Occupancy
Grid (CCOG). Occupancy probabilities of grid cells are estimated
in a Bayesian formulation, from the likelihood of stereo cost measurements
taken at all distance hypotheses. This is accomplished with only
a small set of probabilistic assumptions which we discuss in the
paper. We quantitatively characterize the method's performance under
different conditions of both image noise and number of used stereo
pairs, compared also to traditional algorithms. We complement the
study by giving insights on design choices of CCOGs such as likelihood
model, window size of the cost function and use of a hole filling
method. Experiments were made on a real-world outdoors dataset with
ground-truth data.},
doi = {10.1109/IROS.2014.6942801},
keywords = {Occupancy grids, Stereo vision},
topic = {Stereo vision and robot mapping},
url = {http://www.martimbrandao.com/papers/Brandao2014_iros.pdf}
}

We describe our recent developments in probabilistic modeling of 3D reconstruction with stereo vision, applied to planning strategies for locomotion and gaze. We first overview the use of probabilistic occupancy grids for 3D reconstruction, and the sensor models of stereo best suited to the problem. These grids are then used for robot navigation, which is tackled at two levels: 1) At the locomotion level, trajectories are computed from the grid using an A* search algorithm that minimizes the total probability of occupancy over the trajectory. 2) At the grid level, we propose two task-relevant active strategies which redirect the sensor to "maximum visible entropy" and "maximum visible occupancy" points along the planned locomotion trajectories. Steps 1) and 2) are executed alternately until the locomotion trajectory converges to a high certainty, safe solution. Results of the proposed gaze and locomotion planning strategies were obtained on simulated scenarios and a real robot. Estimates of the uncertainty that occupancy grids are subjected to in real outdoor scenarios were computed for different stereo sensor models. These estimates were used in active gaze simulations for an extensive comparison of gaze strategies across 400 randomly generated environments. The results show that careful modeling of stereo vision sensor uncertainty and the proposed task-relevant planning strategies lead to more complete and consequently collision-free reconstructions of the environment along planned robot trajectories.

@INPROCEEDINGS{Brandao2014mechatronics,
author = {Martim Brandao and Kenji Hashimoto and Atsuo Takanishi},
title = {Uncertainty-based mapping and planning strategies for safe navigation
of robots with stereo vision},
booktitle = {14th Mechatronics Forum Conference},
year = {2014},
pages = {80--85},
month = {June},
abstract = {We describe our recent developments in probabilistic modeling of 3D
reconstruction with stereo vision, applied to planning strategies
for locomotion and gaze. We first overview the use of probabilistic
occupancy grids for 3D reconstruction, and the sensor models of stereo
best suited to the problem. These grids are then used for robot navigation,
which is tackled at two levels: 1) At the locomotion level, trajectories are computed from
the grid using an A* search algorithm that minimizes the total probability
of occupancy over the trajectory. 2) At the grid level, we propose
two task-relevant active strategies which redirect the sensor to
"maximum visible entropy" and "maximum visible occupancy" points
along the planned locomotion trajectories. Steps 1) and 2) are executed
alternately until the locomotion trajectory converges to a high certainty,
safe solution.
Results of the proposed gaze and locomotion planning strategies were
obtained on simulated scenarios and a real robot. Estimates of the
uncertainty that occupancy grids are subjected to in real outdoor
scenarios were computed for different stereo sensor models. These
estimates were used in active gaze simulations for an extensive comparison
of gaze strategies across 400 randomly generated environments. The
results show that careful modeling of stereo vision sensor uncertainty
and the proposed task-relevant planning strategies lead to more complete
and consequently collision-free reconstructions of the environment
along planned robot trajectories.},
keywords = {Occupancy grids, Stereo vision, Active gaze},
topic = {Stereo vision and robot mapping},
url = {http://www.martimbrandao.com/papers/Brandao2014_mechatronics.pdf}
}

We propose a new biped locomotion planning method that optimizes locomotion speed subject to friction constraints. For this purpose we use approximate models of required coefficient of friction (RCOF) as a function of gait. The methodology is inspired by findings in human gait analysis, where subjects have been shown to adapt spatial and temporal variables of gait in order to reduce RCOF in slippery environments. Here we solve the friction problem similarly, by planning on gait parameter space: namely foot step placement, step swing time, double support time and height of the center of mass (COM). We first used simulations of a 48 degrees-of-freedom robot to estimate a model of how RCOF varies with these gait parameters. Then we developed a locomotion planning algorithm that minimizes the time the robot takes to reach a goal while keeping acceptable RCOF levels. Our physics simulation results show that RCOF-aware planning can drastically reduce slippage amount while still maximizing efficiency in terms of locomotion speed. Also, according to our experiments human-like stretched-knees walking can reduce slippage amount more than bent-knees (i.e. crouch) walking for the same speed.

@INPROCEEDINGS{Brandao2014humanoids,
author = {Martim Brandao and Kenji Hashimoto and Jos{\'e} Santos-Victor and
Atsuo Takanishi},
title = {Gait planning for biped locomotion on slippery terrain},
booktitle = {14th IEEE-RAS International Conference on Humanoid Robots},
year = {2014},
pages = {303--308},
month = {November},
abstract = {We propose a new biped locomotion planning method that optimizes locomotion
speed subject to friction constraints. For this purpose we use approximate
models of required coefficient of friction (RCOF) as a function of
gait. The methodology is inspired by findings in human gait analysis,
where subjects have been shown to adapt spatial and temporal variables
of gait in order to reduce RCOF in slippery environments. Here we
solve the friction problem similarly, by planning on gait parameter
space: namely foot step placement, step swing time, double support
time and height of the center of mass (COM). We first used simulations
of a 48 degrees-of-freedom robot to estimate a model of how RCOF
varies with these gait parameters. Then we developed a locomotion
planning algorithm that minimizes the time the robot takes to reach
a goal while keeping acceptable RCOF levels. Our physics simulation
results show that RCOF-aware planning can drastically reduce slippage
amount while still maximizing efficiency in terms of locomotion speed.
Also, according to our experiments human-like stretched-knees walking
can reduce slippage amount more than bent-knees (i.e. crouch) walking
for the same speed.},
doi = {10.1109/humanoids.2014.7041376},
keywords = {Humanoid robots, Humanoid locomotion},
topic = {Humanoid and legged robot locomotion;Human-inspired algorithms},
url = {http://www.martimbrandao.com/papers/Brandao2014_humanoids.pdf}
}

Humanoid robots have the formidable advantage of possessing a body quite similar in shape to that of humans. This body grants them, obviously, locomotion but also a medium to express emotions without even needing a face. In this paper we propose to study the effects of emotional gaits from our biped humanoid robot on the subjects’ perception of the robot (recognition rate of the emotions, reaction time, anthropomorphism, safety, likeness, etc.). We made the robot walk towards the subjects with different emotional gait patterns. We assessed positive (Happy) and negative (Sad) emotional gait patterns on 26 subjects divided into two groups (whether they were familiar with robots or not). We found that even though the recognition of the different types of patterns does not differ between groups, the reaction time does. We found that emotional gait patterns affect the perception of the robot. The implications of the current results for Human Robot Interaction (HRI) are discussed.

@INPROCEEDINGS{Destephe2014,
author = {Matthieu Destephe and Martim Brandao and Tatsuhiro Kishi and Massimiliano
Zecca and Kenji Hashimoto and Atsuo Takanishi},
title = {Emotional Gait: Effects on Humans’ Perception of Humanoid Robots},
booktitle = {23rd IEEE International Symposium on Robot and Human Interactive
Communication},
year = {2014},
pages = {261--266},
month = {August},
abstract = {Humanoid robots have the formidable advantage of possessing a body quite
similar in shape to that of humans. This body grants them, obviously, locomotion
but also a medium to express emotions without even needing a face.
In this paper we propose to study the effects of emotional gaits
from our biped humanoid robot on the subjects’ perception of the
robot (recognition rate of the emotions, reaction time, anthropomorphism,
safety, likeness, etc.). We made the robot walk towards the subjects
with different emotional gait patterns. We assessed positive (Happy)
and negative (Sad) emotional gait patterns on 26 subjects divided
into two groups (whether they were familiar with robots or not). We
found that even though the recognition of the different types of
patterns does not differ between groups, the reaction time does.
We found that emotional gait patterns affect the perception of the
robot. The implications of the current results for Human Robot Interaction
(HRI) are discussed.},
doi = {10.1109/roman.2014.6926263},
keywords = {Human-Robot Interaction, Humanoid robots},
topic = {Human-robot interaction}
}

We describe a learning strategy that allows a humanoid robot to autonomously build a representation of its workspace: we call this representation Reachable Space Map. Interestingly, the robot can use this map to: (i) estimate the Reachability of a visually detected object (i.e. judge whether the object can be reached for, and how well, according to some performance metric) and (ii) modify its body posture or its position with respect to the object to achieve better reaching. The robot learns this map incrementally during the execution of goal-directed reaching movements; reaching control employs kinematic models that are updated online as well. Our solution is innovative with respect to previous works in three aspects: the robot workspace is described using a gaze-centered motor representation, the map is built incrementally during the execution of goal-directed actions, learning is autonomous and online. We implement our strategy on the 48-DOFs humanoid robot Kobian and we show how the Reachable Space Map can support intelligent reaching behavior with the whole-body (i.e. head, eyes, arm, waist, legs).

@ARTICLE{Jamone2014,
author = {Lorenzo Jamone and Martim Brandao and Lorenzo Natale and Kenji Hashimoto
and Giulio Sandini and Atsuo Takanishi},
title = {Autonomous online generation of a motor representation of the workspace
for intelligent whole-body reaching},
journal = {Robotics and Autonomous Systems},
year = {2014},
volume = {62},
pages = {556--567},
number = {4},
abstract = {We describe a learning strategy that allows a humanoid robot to autonomously
build a representation of its workspace: we call this representation
Reachable Space Map. Interestingly, the robot can use this map to:
(i) estimate the Reachability of a visually detected object (i.e.
judge whether the object can be reached for, and how well, according
to some performance metric) and (ii) modify its body posture or its
position with respect to the object to achieve better reaching. The
robot learns this map incrementally during the execution of goal-directed
reaching movements; reaching control employs kinematic models that
are updated online as well. Our solution is innovative with respect
to previous works in three aspects: the robot workspace is described
using a gaze-centered motor representation, the map is built incrementally
during the execution of goal-directed actions, learning is autonomous
and online. We implement our strategy on the 48-DOFs humanoid robot
Kobian and we show how the Reachable Space Map can support intelligent
reaching behavior with the whole-body (i.e. head, eyes, arm, waist,
legs).},
doi = {10.1016/j.robot.2013.12.011},
issn = {0921-8890},
keywords = {Whole-body reaching, Humanoid robots},
topic = {Human-inspired algorithms}
}

We present a novel control architecture for the integration of visually guided walking and whole-body reaching in a humanoid robot. We propose to use robot gaze as a common reference frame for both locomotion and reaching, as suggested by behavioral neuroscience studies in humans. A gaze controller allows the robot to track and fixate a target object, and motor information related to gaze control is then used to i) estimate the reachability of the target, ii) steer locomotion, iii) control whole-body reaching. The reachability is a measure of how well the object can be reached for, depending on the position and posture of the robot with respect to the target, and it is obtained from the gaze motor information using a mapping that has been learned autonomously by the robot through motor experience: we call this mapping Reachable Space Map. In our approach, both locomotion and whole-body movements are seen as ways to maximize the reachability of a visually detected object, thus i) expanding the robot workspace to the entire visible space and ii) exploiting the robot redundancy to optimize reaching. We implement our method on a full 48-DOF humanoid robot and provide experimental results in the real world.

@INPROCEEDINGS{Brandao2013humanoids,
author = {Martim Brandao and Lorenzo Jamone and Przemyslaw Kryczka and Nobotsuna
Endo and Kenji Hashimoto and Atsuo Takanishi},
title = {Reaching for the unreachable: integration of locomotion and whole-body
movements for extended visually guided reaching},
booktitle = {13th IEEE-RAS International Conference on Humanoid Robots (Humanoids)},
year = {2013},
pages = {28--33},
month = {October},
abstract = {We present a novel control architecture for the integration of visually
guided walking and whole-body reaching in a humanoid robot. We propose
to use robot gaze as a common reference frame for both locomotion
and reaching, as suggested by behavioral neuroscience studies in
humans. A gaze controller allows the robot to track and fixate a
target object, and motor information related to gaze control is then
used to i) estimate the reachability of the target, ii) steer locomotion,
iii) control whole-body reaching. The reachability is a measure of
how well the object can be reached for, depending on the position
and posture of the robot with respect to the target, and it is obtained
from the gaze motor information using a mapping that has been learned
autonomously by the robot through motor experience: we call this
mapping Reachable Space Map. In our approach, both locomotion and
whole-body movements are seen as ways to maximize the reachability
of a visually detected object, thus i) expanding the robot workspace
to the entire visible space and ii) exploiting the robot redundancy
to optimize reaching. We implement our method on a full 48-DOF humanoid
robot and provide experimental results in the real world.},
doi = {10.1109/HUMANOIDS.2013.7029951},
issn = {2164-0572},
keywords = {Visual tracking, Whole-body reaching, Humanoid robots, Humanoid locomotion},
topic = {Humanoid and legged robot locomotion;Human-inspired algorithms},
url = {http://www.martimbrandao.com/papers/Brandao2013_humanoids.pdf}
}

Robots depend on a world map representation in order to navigate on it. Only a part of the space around the agent can be sensed at each time and so measures must be taken in order to reduce the uncertainty of this map and likelihood of collision. In this work we propose the use of a probabilistic occupancy grid to guide active gaze of the robot on the “walk to target” task. A map uncertainty measure is proposed, as is a method for choosing gaze points along the robot’s computed trajectory to anticipate the need for trajectory changes. Gaze points are chosen from the whole space volume the robot will traverse. Then, robot trajectories are computed directly on the probabilistic map in order to drive the robot towards free-space areas of high confidence. A preliminary evaluation of the approach is done on a real scenario using the humanoid robot KOBIAN for the preparatory gaze exploration task necessary for safe trajectory planning to a target.

@INPROCEEDINGS{Brandao2013isrm,
author = {Martim Brandao and Ricardo Ferreira and Kenji Hashimoto and Jos{\'e}
Santos-Victor and Atsuo Takanishi},
title = {Active Gaze Strategy for Reducing Map Uncertainty along a Path},
booktitle = {3rd IFToMM International Symposium on Robotics and Mechatronics},
year = {2013},
pages = {455--466},
month = {October},
abstract = {Robots depend on a world map representation in order to navigate on
it. Only a part of the space around the agent can be sensed at each
time and so measures must be taken in order to reduce the uncertainty
of this map and likelihood of collision. In this work we propose
the use of a probabilistic occupancy grid to guide active gaze of
the robot on the “walk to target” task. A map uncertainty measure
is proposed, as is a method for choosing gaze points along the robot’s
computed trajectory to anticipate the need for trajectory changes.
Gaze points are chosen from the whole space volume the robot will
traverse. Then, robot trajectories are computed directly on the probabilistic
map in order to drive the robot towards free-space areas of high
confidence. A preliminary evaluation of the approach is done on a
real scenario using the humanoid robot KOBIAN for the preparatory
gaze exploration task necessary for safe trajectory planning to a
target.},
keywords = {Occupancy grids, Active gaze},
topic = {Stereo vision and robot mapping},
url = {http://www.martimbrandao.com/papers/Brandao2013_isrm.pdf}
}

Extensive literature has been written on occupancy grid mapping for different sensors. When stereo vision is applied to the occupancy grid framework it is common, however, to use sensor models that were originally conceived for other sensors such as sonar. Although sonar provides a distance to the nearest obstacle for several directions, stereo has confidence measures available for each distance along each direction. The common approach is to take the highest-confidence distance as the correct one, but such an approach disregards mismatch errors inherent to stereo. In this work, stereo confidence measures of the whole sensed space are explicitly integrated into 3D grids using a new occupancy grid formulation. Confidence measures themselves are used to model uncertainty and their parameters are computed automatically in a maximum likelihood approach. The proposed methodology was evaluated in both simulation and a real-world outdoor dataset which is publicly available. Mapping performance of our approach was compared with a traditional approach and shown to achieve fewer errors in the reconstruction.

@INPROCEEDINGS{Brandao2013iros,
author = {Martim Brandao and Ricardo Ferreira and Kenji Hashimoto and Jos{\'e}
Santos-Victor and Atsuo Takanishi},
title = {Integrating the whole cost-curve of stereo into occupancy grids},
booktitle = {2013 IEEE/RSJ International Conference on Intelligent Robots and
Systems},
year = {2013},
pages = {4681--4686},
month = {November},
abstract = {Extensive literature has been written on occupancy grid mapping for
different sensors. When stereo vision is applied to the occupancy
grid framework it is common, however, to use sensor models that were
originally conceived for other sensors such as sonar. Although sonar
provides a distance to the nearest obstacle for several directions,
stereo has confidence measures available for each distance along
each direction. The common approach is to take the highest-confidence
distance as the correct one, but such an approach disregards mismatch
errors inherent to stereo.
In this work, stereo confidence measures of the whole sensed space
are explicitly integrated into 3D grids using a new occupancy grid
formulation. Confidence measures themselves are used to model uncertainty
and their parameters are computed automatically in a maximum likelihood
approach. The proposed methodology was evaluated in both simulation
and a real-world outdoor dataset which is publicly available. Mapping
performance of our approach was compared with a traditional approach
and shown to achieve fewer errors in the reconstruction.},
doi = {10.1109/IROS.2013.6697030},
issn = {2153-0858},
keywords = {Occupancy grids, Stereo vision},
topic = {Stereo vision and robot mapping},
url = {http://www.martimbrandao.com/papers/Brandao2013_iros.pdf}
}

Humanoid robots are complex sensorimotor systems where the existence of internal models is of utmost importance both for control purposes and for predicting the changes in the world arising from the system’s own actions. This so-called expected perception relies on the existence of accurate internal models of the robot’s sensorimotor chains. We assume that the kinematic model is known in advance but that the absolute offsets of the different axes cannot be directly retrieved from encoders. We propose a method to estimate such parameters, the zero position of the joints of a humanoid robotic head, by relying on proprioceptive sensors such as relative encoders, inertial sensing and visual input. We show that our method can estimate the correct offsets of the different joints (i.e. absolute positioning) in a continuous, online manner. Not only is the method robust to noise, but it can also cope with and adjust to abrupt changes in the parameters. Experiments with three different robotic heads are presented and illustrate the performance of the methodology as well as the advantages of using such an approach.

@INPROCEEDINGS{Moutinho2012,
author = {Nuno Moutinho and Martim Brandao and Ricardo Ferreira and Jos{\'e}
Ant{\'o}nio Gaspar and Alexandre Bernardino and Atsuo Takanishi and
Jos{\'e} Santos-Victor},
title = {Online calibration of a humanoid robot head from relative encoders,
IMU readings and visual data},
booktitle = {2012 IEEE/RSJ International Conference on Intelligent Robots and
Systems},
year = {2012},
pages = {2070--2075},
month = {October},
abstract = {Humanoid robots are complex sensorimotor systems where the existence
of internal models is of utmost importance both for control purposes
and for predicting the changes in the world arising from the system’s
own actions. This so-called expected perception relies on the existence
of accurate internal models of the robot’s sensorimotor chains.
We assume that the kinematic model is known in advance but that the
absolute offsets of the different axes cannot be directly retrieved
from encoders. We propose a method to estimate such parameters, the
zero position of the joints of a humanoid robotic head, by relying
on proprioceptive sensors such as relative encoders, inertial sensing
and visual input.
We show that our method can estimate the correct offsets of the different
joints (i.e. absolute positioning) in a continuous, online manner.
Not only is the method robust to noise, but it can also cope with
and adjust to abrupt changes in the parameters. Experiments with
three different robotic heads are presented and illustrate the performance
of the methodology as well as the advantages of using such an approach.},
doi = {10.1109/IROS.2012.6386162},
issn = {2153-0858},
keywords = {Visual tracking, Humanoid robots},
topic = {Robot design and calibration}
}

Tracking an object’s 3D position and orientation from a color image can be accomplished with particle filters if its color and shape properties are known. Unfortunately, initialization in particle filters is often manual or random, thus rendering the tracking recovery process slow or no longer autonomous. A method that uses image data to generate likely pose hypotheses for known objects is proposed. These generated pose hypotheses are then used to guide visual attention and computer resources in a “top-down” tracking system such as a particle filter: speeding up the tracking process and making it more robust to unpredictable movement.

@INPROCEEDINGS{Brandao2011,
author = {Martim Brandao and Alexandre Bernardino and Jos{\'e} Santos-Victor},
title = {Image Driven Generation of Pose Hypotheses for 3D Model-based Tracking},
booktitle = {12th IAPR Conference on Machine Vision Applications},
year = {2011},
pages = {59--62},
month = {June},
abstract = {Tracking an object's 3D position and orientation from a color image
can be accomplished with particle filters if its color and shape
properties are known. Unfortunately, initialization in particle filters
is often manual or random, thus rendering the tracking recovery process
slow or no longer autonomous. A method that uses image data to generate
likely pose hypotheses for known objects is proposed. These generated
pose hypotheses are then used to guide visual attention and computer
resources in a “top-down” tracking system such as a particle filter:
speeding up the tracking process and making it more robust to unpredictable
movement.},
keywords = {Visual tracking},
topic = {Visual tracking},
url = {http://www.martimbrandao.com/papers/Brandao2011_mva.pdf}
}

Tracking an object’s 3D pose from a color image can be accomplished with particle filters if its color and shape properties are known a priori. Unfortunately, initialization in particle filters is often manual or random, thus rendering the tracking recovery process slow or no longer autonomous. A method that uses existing object information to better decide on where to automatically start or recover the tracking process is proposed. Each 3D pose of an object is observed as a 2D shape and so training is made to infer pose from image information. The object is first segmented through color, then shape description is made using geometric moments and finally a learning stage maps 2D shapes to 3D poses with an associated likelihood measure.

@INPROCEEDINGS{Brandao2010,
author = {Martim Brandao and Alexandre Bernardino},
title = {Generating pose hypotheses for 3D tracking: a bottom-up approach},
booktitle = {16th Portuguese Conference on Pattern Recognition},
year = {2010},
month = {October},
abstract = {Tracking an object's 3D pose from a color image can be accomplished
with particle filters if its color and shape properties are known
a priori. Unfortunately, initialization in particle filters is often
manual or random, thus rendering the tracking recovery process slow
or no longer autonomous. A method that uses existing object information
to better decide on where to automatically start or recover the tracking
process is proposed. Each 3D pose of an object is observed as a 2D
shape and so training is made to infer pose from image information.
The object is first segmented through color, then shape description
is made using geometric moments and finally a learning stage maps
2D shapes to 3D poses with an associated likelihood measure.},
keywords = {Visual tracking},
topic = {Visual tracking}
}