Adding joint limits to the Joint description in KDL

This is a small patch to the Joint class of KDL to add storage of joint
limits. We hope to use them in some new inverse kinematic solvers, and Ruben
Smits (sitting next to me at a summer school) suggested this change.

This message contains a _proposal_ (that can be discussed!) about the
set of object classes that I think KDL needs to have (in the medium term)
to achieve its goal of being able to represent, to specify, to compute, to
simulate and to control _all_ kinds of robotic devices, in realtime if
desired. I know that this proposal is not yet "perfect", so please be
critical and creative in your reactions! :-)

The design considerations behind this suggestion are:
- optimal decoupling: trying to design libraries that are as small as
possible and still create added value.
- coping with the reality that different people/applications require
different mathematical and/or numerical representations, as well as
different algorithms to solve the same problems.
- lumped-parameter systems: KDL will be able to work with all mechanical
structures that consist of ideal rigid body links and joints, coupled
with localized stiffnesses and damping.
- serial, tree and graph structures will be supported.
- SOA: eventually, KDL should be able to be used in a "service oriented
architecture", where each "client" can request services with customized
quality of service. The most visible results of this requirement are that
(i) the "state" of a kinematic/dynamic system must be a class on its own,
and (ii) KDL distinguishes between solvers, simulations and controllers,
as three semantically different kinds of kinematics & dynamics algorithm
"services".

- Chain:
- represents Joints and Segments:
- a Segment is a Body with a number of named reference Frames (=
locations to place Joints).
- a Joint represents the (physical!) motion constraints between
Segments
- Joints are located at the named Frames of a Segment.
- depends on Body.

- (Chain)State:
- represents physical state of Chain: F, X, \dot{X}, \ddot{X},
\tau, q, \dot{q}, \ddot{q}.
- depends on Body and Chain.
- is kept separate from Chain, because Chain should be usable without
state information.
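
The Chain/State split above can be sketched roughly like this (type names are placeholders for the proposal, not existing KDL classes):

```cpp
#include <cstddef>
#include <cassert>
#include <vector>

// The Chain holds only structure (segments, joints); all time-varying
// quantities (q, q_dot, tau, ...) live in a separate state object that
// is sized from the Chain it belongs to.
struct ChainSketch {
    std::size_t nrOfJoints;   // stand-in for the full segment/joint description
};

struct ChainStateSketch {
    std::vector<double> q, q_dot, q_dotdot, tau;

    explicit ChainStateSketch(const ChainSketch& c)
        : q(c.nrOfJoints, 0.0),
          q_dot(c.nrOfJoints, 0.0),
          q_dotdot(c.nrOfJoints, 0.0),
          tau(c.nrOfJoints, 0.0) {}
};
```

Several clients can then each own a ChainStateSketch while sharing one (read-only) ChainSketch, which is what the SOA requirement asks for.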

- (Instantaneous)Motion:
- represents (artificial!) instantaneous constraints on a Chain, to make it
move.
- specifies motion by adding virtual "joints" to a physical
Chain and by specifying desired motions of those joints.
- (hence) introduces extra Chain and State objects with which to specify
the motion task.
- depends on Body, Chain and State.

- Solver:
- transforms an in-State of a Chain into an out-State, taking into
account the desired Motion task.
- depends on Body, Chain, State and Motion.

- Controller:
- is a special case of Solver (an interactive one)
- adds extra control error states.
- adds Commands (to support Service-Oriented Architecture).
- adds DataPorts (to support asynchronous interaction with "environment").
- (hence) adds Activity to a Simulation.
- can be implemented in all the various types of Components as defined, for
example, in the CORBA Component Model.
- a realtime Controller requires realtime activity to schedule its
Solver computations.

Since we want to use Solvers in realtime now and then, all algorithms
should be "realtime safe". Simply put: no dynamic memory allocation! Even
simpler put: allocate all temporary variables before you call a solver.
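
The "allocate before you call a solver" rule could look like this in practice. A toy sketch (the class name and computation are stand-ins): the solver owns all scratch storage, sized once at construction, so the realtime call performs no dynamic memory allocation:

```cpp
#include <cstddef>
#include <cassert>
#include <vector>

class FkSolverSketch {   // hypothetical name, not an actual KDL solver
public:
    explicit FkSolverSketch(std::size_t n_joints)
        : tmp_(n_joints, 0.0) {}   // one-time allocation, outside the loop

    // realtime-safe: only reads and writes pre-allocated buffers
    double solve(const std::vector<double>& q) {
        double sum = 0.0;
        for (std::size_t i = 0; i < q.size(); ++i) {
            tmp_[i] = q[i] * 2.0;   // stand-in for the real computation
            sum += tmp_[i];
        }
        return sum;
    }

private:
    std::vector<double> tmp_;   // scratch buffer, never resized after construction
};
```

The constructor runs at configuration time; only solve() runs in the realtime loop.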

So, please comment on how well or how badly this "roadmap" fits with yours, and
where your priorities lie. Roughly said, we will prefer to cooperate on
contributions that fit into this roadmap :-)

Of all things, this is a burden and PITA. Choose double; it has been shown
before that float is in the end numerically insufficient. If someone really
needs float for a deeply embedded device, he can just compile the library
with #define double float and have the burden of maintaining this
implementation. Trust me on this, it won't be 1% of your users. Even embedded
devices tend to grow larger every year.

[...]
>
> Since we want to use Solvers in realtime now and then, all algorithms
> should be "realtime safe". Simply put: no dynamic memory allocation! Even
> simpler put: allocate all temporary variables before you call a solver.

See my previous mail about this. If you disallow malloc, you'll be throwing
away most algorithms used today. You can allow mallocs as long as they are
not a fundamental part of your library and just the property of a specific
implementation.

For the rest of the proposal, I can actually understand most of it and I find
the names well chosen. It clearly has potential.

Not that I feel very strongly about only implementing a double version to
start with, but do you have proof for this statement? Most robots use only
10-12 bits of position measurement accuracy from their encoders anyway...

> If someone really
> needs float for a deeply embedded device, he can just compile the library
> with #define double float and have the burden of maintaining this
> implementation. Trust me on this, it won't be 1% of your users. Even embedded
> devices tend to grow larger every year.
>
> [...]
>>
>> Since we want to use Solvers in realtime now and then, all algorithms
>> should be "realtime safe". Simply put: no dynamic memory allocation! Even
>> simpler put: allocate all temporary variables before you call a solver.
>
> See my previous mail about this. If you disallow malloc, you'll be throwing
> away most algorithms used today.
Not the ones we need, which is basically a good plain efficient SVD.
And these can even be the FORTRAN ones that have proven their value for
decades.
Again, I put the ball back in your court: what do we really miss by strictly
forbidding memory allocations in realtime?

> You can allow mallocs as long as they are
> not a fundamental part of your library and just the property of a specific
> implementation.
I don't understand this sentence... And after all, realtime _is_ a fundamental
part of the orocos project, isn't it?

> For the rest of the proposal, I can actually understand most of it and I find
> the names well chosen. It clearly has potential.
...but it will need a lot of work to realize that potential... :-)

On the other hand, I am preparing a quite similar "technical white paper"
for BFL (and for the not yet existing Systems & Control Library) that can
both be structured along the same lines...

On Tuesday 12 August 2008 13:43:27 Herman Bruyninckx wrote:
> On Tue, 12 Aug 2008, Peter Soetens wrote:
> > On Wednesday 23 July 2008 15:43:40 Herman Bruyninckx wrote:
> >> For all its classes, KDL will have two complementary APIs:
> >> - Semantic: reflects the physics, so doesn't show mathematical
> >> representation ("coordinates"), and doesn't show the dimension of the
> >> space (2D, 3D, 6D). Mostly used for symbolic preprocessing and during
> >> configuration and deployment.
> >> - Mathematical:
> >> - coordinate representations of physics (vector coordinates, matrix
> >> coordinates, ...).
> >> - offers transformations between mathematical representations:
> >> - between coordinate reference frames.
> >> - between physical units, e.g., radians - degrees, meters - inches, ...
> >> - between coordinate representation choices (e.g., quaternions, Euler
> >> angles, place of angular components vs linear components in 6D).
> >> - between computer hardware representation choices: float, double, ...
> >
> > Of all things, this is a burden and PITA. Choose double, it has been
> > shown before that float is in the end numerically insufficient.
>
> Not that I feel very strongly about only implementing a double version to
> start with, but do you have proof for this statement? Most robots use only
> 10-12 bits of position measurement accuracy from their encoders anyway...

I believe I had this information (a long time ago) from Numerical Recipes in
C. It was stated that for any scientific purpose, double precision should be
used when available. From a performance point-of-view, there is also very
little difference (only more memory is used, not more/slower instructions).

>
> > If someone really
> > needs float for a deeply embedded device, he can just compile the library
> > with #define double float and have the burden of maintaining this
> > implementation. Trust me on this, it won't be 1% of your users. Even
> > embedded devices tend to grow larger every year.
> >
> > [...]
> >
> >> Since we want to use Solvers in realtime now and then, all algorithms
> >> should be "realtime safe". Simply put: no dynamic memory allocation!
> >> Even simpler put: allocate all temporary variables before you call a
> >> solver.
> >
> > See my previous mail about this. If you disallow malloc, you'll be
> > throwing away most algorithms used today.
>
> Not the ones we need, which is basically a good plain efficient SVD.
> And these can even be the FORTRAN ones that have proven their value for
> decades.
> Again, I put the ball back in your court: what do we really miss by strictly
> forbidding memory allocations in realtime?

First, you're assuming that every solver will be used in real-time; lots of
applications don't require this. Second, any algorithm written at present
with mallocs would be harder to port, lowering your 'critical mass'.

>
> > You can allow mallocs as long as they are
> > not a fundamental part of your library and just the property of a
> > specific implementation.
>
> I don't understand this sentence... And after all, realtime _is_ a
> fundamental part of the orocos project, isn't it?

Let me reformulate: Don't 'inject' mallocs in the standard ('common')
infrastructure of KDL. But allow an algorithm to use whatever it wants, just
document its 'performance quality'.

>>> [...]
>>>> Since we want to use Solvers in realtime now and then, all algorithms
>>>> should be "realtime safe". Simply put: no dynamic memory allocation!
>>>> Even simpler put: allocate all temporary variables before you call a
>>>> solver.
>>>
>>> See my previous mail about this. If you disallow malloc, you'll be
>>> throwing away most algorithms used today.
>>
>> Not the ones we need, which is basically a good plain efficient SVD.
>> And these can even be the FORTRAN ones that have proven their value for
>> decades.
>> Again, I put the ball back in your court: what do we really miss by strictly
>> forbidding memory allocations in realtime?
>
> First, you're assuming that every solver will be used in real-time; lots of
> applications don't require this.

I do not assume this. But I agree that I haven't made this sufficiently
clear in previous messages :-)

> Second, any algorithm written at present
> with mallocs would be harder to port, lowering your 'critical mass'.
Again: what do we lose _concretely_ with these "algorithms written
at present with mallocs"...? I know that BFL has some legacy of matrix
libraries that use mallocs, but that is not a fundamental legacy, just a
result of not using the realtime-safe libraries because these were not in
C++...

>>> You can allow mallocs as long as they are
>>> not a fundamental part of your library and just the property of a
>>> specific implementation.
>>
>> I don't understand this sentence... And after all, realtime _is_ a
>> fundamental part of the orocos project, isn't it?
>
> Let me reformulate: Don't 'inject' mallocs in the standard ('common')
> infrastructure of KDL. But allow an algorithm to use whatever it wants, just
> document its 'performance quality'.
I agree! Do you have any suggestions about how to do this "just
documenting" in a consistent, easy, Right Way...? (I don't, for the
moment...)

>> >
>> > Of all things, this is a burden and PITA. Choose double, it has been
>> > shown before that float is in the end numerically insufficient.
>>
>> Not that I feel very strongly about only implementing a double version to
>> start with, but do you have proof for this statement? Most robots use only
>> 10-12 bits of position measurement accuracy from their encoders anyway...
>
>I believe I had this information (a long time ago) from Numerical Recipes in
>C. It was stated that for any scientific purpose, double precision should be
>used when available. From a performance point-of-view, there is also very
>little difference (only more memory is used, not more/slower instructions).

Agree with Peter. From the embedded point of view, float computations can be _slower_ than double. This might be counter-intuitive, until you look at the disassembled code and realise how many conversions are made from float to double and back. Many (most?) floating-point units operate on doubles only, internally. YMMV, but I've seen a couple of instances of this.

>>>> Of all things, this is a burden and PITA. Choose double, it has been
>>>> shown before that float is in the end numerically insufficient.
>>>
>>> Not that I feel very strongly about only implementing a double version to
>>> start with, but do you have proof for this statement? Most robots use only
>>> 10-12 bits of position measurement accuracy from their encoders anyway...
>>
>> I believe I had this information (a long time ago) from Numerical Recipes in
>> C.
NRC is not at all(!) an authority in numerical circles...!

>> It was stated that for any scientific purpose, double precision should be
>> used when available. From a performance point-of-view, there is also very
>> little difference (only more memory is used, not more/slower instructions).
>
> Agree with Peter. From the embedded point of view, float computations can be _slower_ than double. This might be counter-intuitive, until you look at the disassembled code and realise how many conversions are made from float to double and back. Many (most?) floating-point units operate on doubles only, internally. YMMV, but I've seen a couple of instances of this.

When I talk about "float", I talk about using floats consistently, and not
only somewhere in your computations :-)

But, I repeat that I have nothing serious against a "double only"
implementation; I just don't want to do things that lock us into "double
only"...
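
One way to avoid such lock-in, without the fragile `#define double float` trick, would be a single scalar typedef (or template parameter) in one header. A hedged sketch; kdl_scalar and KDL_USE_FLOAT are hypothetical names, not anything that exists in KDL today:

```cpp
#include <cmath>

// Central scalar-type choice: default to double (as argued in this
// thread), but allow a whole-library switch at compile time without
// redefining a keyword.
#ifndef KDL_USE_FLOAT
typedef double kdl_scalar;
#else
typedef float kdl_scalar;
#endif

// Example of a unit conversion written against the chosen scalar type.
kdl_scalar deg2rad(kdl_scalar deg) {
    return deg * static_cast<kdl_scalar>(3.14159265358979323846 / 180.0);
}
```

Everything in the library would then use kdl_scalar, so "double only" stays the default without being hard-wired.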

On Tuesday 12 August 2008 17:07:01 Herman Bruyninckx wrote:
> On Tue, 12 Aug 2008, S Roderick wrote:
> >>>> Of all things, this is a burden and PITA. Choose double, it has been
> >>>> shown before that float is in the end numerically insufficient.
> >>>
> >>> Not that I feel very strongly about only implementing a double version
> >>> to start with, but do you have proof for this statement? Most robots
> >>> use only 10-12 bits of position measurement accuracy from their
> >>> encoders anyway...
> >>
> >> I believe I had this information (a long time ago) from Numerical
> >> Recipes in C.
>
> NRC is not at all(!) an authority in numerical circles...!

Receiving some criticism doesn't mean it's full of crap. I'm
sure there are more sources which confirm the double vs float argument, but if
you want to learn about them, feel free to find them yourselves :-)

I agree with the proposed set of object classes, as I think it will make the
addition of new features (dynamics, controllers, etc.) cleaner.

I'm willing to re-factor my implementation of inverse dynamics that I had sent
in an earlier email (based on iterative Newton-Euler as presented in the book
Robot Modeling and Control by Spong, Hutchinson and Vidyasagar) to both
make it real-time safe and fit with the proposed structure.

I also think it would be nice to see the proposal as actual C++ header files,
as this would expose some of the finer details, such as how exactly joint
limits and other constraints will be handled.

Advait

--
Willow Garage Inc.
(http://www.willowgarage.com)

On Wed, Jul 23, 2008 at 6:43 AM, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
> Dear KDL friends,
>
> This message contains a _proposal_ (that can be discussed!) about the
> set of object classes that I think KDL needs to have (in the medium term)
> to achieve its goal of being able to represent, to specify, to compute, to
> simulate and to control _all_ kinds of robotic devices, in realtime if
> desired. I know that this proposal is not yet "perfect", so please be
> critical and creative in your reactions! :-)
>
> The design considerations behind this suggestion are:
> - optimal decoupling: trying to design libraries that are as small as
> possible and still create added value.
> - coping with the reality that different people/applications require
> different mathematical and/or numerical representations, as well as
> different algorithms to solve the same problems.
> - lumped-parameter systems: KDL will be able to work with all mechanical
> structures that consist of ideal rigid body links and joints, coupled
> with localized stiffnesses and damping.
> - serial, tree and graph structures will be supported.
> - SOA: eventually, KDL should be able to be used in a "service oriented
> architecture", where each "client" can request services with customized
> quality of service. The most visible results of this requirement are that
> (i) the "state" of a kinematic/dynamic system must be a class on its own,
> and (ii) KDL distinguishes between solvers, simulations and controllers,
> as three semantically different kinds of kinematics & dynamics algorithm
> "services".
>
> For all its classes, KDL will have two complementary APIs:
> - Semantic: reflects the physics, so doesn't show mathematical
> representation ("coordinates"), and doesn't show the dimension of the
> space (2D, 3D, 6D). Mostly used for symbolic preprocessing and during
> configuration and deployment.
> - Mathematical:
> - coordinate representations of physics (vector coordinates, matrix
> coordinates, ...).
> - offers transformations between mathematical representations:
> - between coordinate reference frames.
> - between physical units, e.g., radians - degrees, meters - inches, ...
> - between coordinate representation choices (e.g., quaternions, Euler
> angles, place of angular components vs linear components in 6D).
> - between computer hardware representation choices: float, double, ...
>
> So, here is a brief description of the suggested classes:
> - Body ("Body" at semantic level, "Frame" at mathematical level):
> - represents pose and instantaneous motion (velocity, acceleration) of
> rigid body.
> - represents forces on rigid body.
> - represents physical force/motion mappings: Stiffness (position -
> force), Damping (velocity -force), Inertia (acceleration - force).
>
> - Chain:
> - represents Joints and Segments:
> - a Segment is a Body with a number of named reference Frames (=
> locations to place Joints).
> - a Joint represents the (physical!) motion constraints between
> Segments
> - Joints are located at the named Frames of a Segment.
> - depends on Body.
>
> - (Chain)State:
> - represents physical state of Chain: F, X, \dot{X}, \ddot{X},
> \tau, q, \dot{q}, \ddot{q}.
> - depends on Body and Chain.
> - is kept separate from Chain, because Chain should be usable without
> state information.
>
> - (Instantaneous)Motion:
> - represents (artificial!) instantaneous constraints on a Chain, to make it
> move.
> - specifies motion by adding virtual "joints" to a physical
> Chain and by specifying desired motions of those joints.
> - (hence) introduces extra Chain and State objects with which to specify
> the motion task.
> - depends on Body, Chain and State.
>
> - Solver:
> - transforms an in-State of a Chain into an out-State, taking into
> account the desired Motion task.
> - depends on Body, Chain, State and Motion.
>
> - Simulation:
> - adds time integration.
> - adds Events and FSM, for scheduling of (or, switching between) Solvers
> (all previous classes only need Methods and Properties, including
> their Configuration methods).
> - depends on Body, Chain, State, Motion and Solver.
>
> - Controller:
> - is a special case of Solver (an interactive one)
> - adds extra control error states.
> - adds Commands (to support Service-Oriented Architecture).
> - adds DataPorts (to support asynchronous interaction with "environment").
> - (hence) adds Activity to a Simulation.
> - can be implemented in all the various types of Components as defined, for
> example, in the CORBA Component Model.
> - a realtime Controller requires realtime activity to schedule its
> Solver computations.
>
> Since we want to use Solvers in realtime now and then, all algorithms
> should be "realtime safe". Simply put: no dynamic memory allocation! Even
> simpler put: allocate all temporary variables before you call a solver.
>
> So, please comment on how well or how badly this "roadmap" fits with yours, and
> where your priorities lie. Roughly said, we will prefer to cooperate on
> contributions that fit into this roadmap :-)
>
> Herman
> --
> Orocos-Dev mailing list
> Orocos-Dev [..] ...
> http://lists.mech.kuleuven.be/mailman/listinfo/orocos-dev
>
> Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
>
>

> I agree with the proposed set of object classes, as I think it will make the
> addition of new features (dynamics, controllers, etc.) cleaner.
>
> I'm willing to re-factor my implementation of inverse dynamics that I had sent
> in an earlier email (based on iterative Newton-Euler as presented in the book
> Robot Modeling and Control by Spong, Hutchinson and Vidyasagar) to both
> make it real-time safe and fit with the proposed structure.

Good, thanks!

Maybe I looked at the wrong place, but in your code I don't see where you
adapt the inertia during an iteration over the joints... The inertia that
joint i feels must be a function of the inertias of all more distal
joints...

> I also think it would be nice to see the proposal as actual C++ header
> files as this would expose some of the finer details, such as how exactly
> joint limits and other constraints will be handled.
>
This is the most important next step to make, and therefore I would like to
do it in an open way, where everybody can have his/her say :-)

Roughly, the API will look like this (with the example of inverse dynamics,
but it generalizes simply to forward dynamics and forward/inverse
kinematics too):

OutState = InverseDynamics (Chain, Solver, Motion, InState);

where Chain, Solver and Motion all have their own states too. The OutState
and InState depend on the particular Solver, Chain and Motion.

In the case of your ID code, the InState was the current joint position and
joint velocity, the Motion was the desired joint acceleration, and the
OutState was the joint torque needed to drive the chain.

In the case of constraint-based motion specification for a humanoid robot,
the InState is the current joint position and joint velocity, the Motion
is a set of (acceleration-level) constraints on Cartesian "task frames"
attached to the robot (possibly with priorities or weights), and the
OutState is the joint torques, or the joint velocities or accelerations
(depending on the kind of solver).

As you see, the same simple API covers a lot of cases, so our design should
be the simplest possible that can cover all these cases.

Maybe the InState and OutState arguments could be replaced by a single "State"
object, since it contains Chain-, Solver- and Motion-dependent sub-states
anyway... This would give something like:

ErrorCode = InverseDynamics (Chain, Solver, Motion, State);
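
As a C++ sketch, that second variant of the abstract interface might read as follows. All types here are placeholders for the discussion, not existing KDL classes, and the body is a stand-in for a real solver:

```cpp
#include <cassert>

// Placeholder types for the abstract ("semantic") interface.
struct Chain {};                    // kinematic structure only, no state
struct Solver {};                   // solver configuration/workspace
struct Motion {};                   // desired (instantaneous) motion task
struct State { double tau = 0.0; }; // Chain-, Solver- and Motion-dependent sub-states

enum ErrorCode { E_OK = 0, E_FAILED = 1 };

// Abstract signature; a concrete implementation would carry a specific
// name, e.g. "InverseDynamics_RecursiveNewtonEuler".
ErrorCode InverseDynamics(const Chain&, const Solver&,
                          const Motion&, State& state) {
    state.tau = 0.0;   // stand-in for the actual computation
    return E_OK;
}
```

The same signature shape would serve forward dynamics and forward/inverse kinematics, with only the interpretation of the State sub-states changing.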

Note that this is the _abstract_ ("semantic") interface; a concrete
implementation would have a name like
"InverseDynamics_SpongHutchinsonVidyasagra"
(although I think that book is not the original reference of that
particular solver).

A side note: "Newton_Euler" or "Lagrange" is not useful information, since
the recursive techniques that are derived in both frameworks are the same :-)
See
@Article{Silver82,
  author  = {Silver, D. B.},
  title   = {On the equivalence of {L}agrangian and {N}ewton-{E}uler
             dynamics for manipulators},
  journal = {The International Journal of Robotics Research},
  year    = {1982},
  volume  = {1},
  number  = {2},
  pages   = {60--70}
}
(Free download till the end of the month...)

As a summary: the classes for Motion, Solver and State are still
practically nowhere; Chain is already okay to some extent.

Ruben (and some others, including myself) would give priority to a damped
least-squares inverse kinematics (or dynamics, if you include the inertia
as weighting matrix), for a generic serial and tree structure, including a
"CLIK" solver (Closed-Loop Inverse Kinematics). See e.g.

On Wed, Jul 23, 2008 at 11:52 PM, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
>
> Maybe I looked at the wrong place, but in your code I don't see where you
> adapt the inertia during an iteration over the joints... The inertia that
> joint i feels must be a function of the inertias of all more distal
> joints...
>

In this method we solve for the constraint forces and the joint torques.
The effect of the inertias of the distal joints is reflected in the
constraint forces. The backward recursion (solving for the forces) starts
from the last link and goes up the chain.

>
> A side note: "Newton_Euler" or "Lagrange" is not useful information, since
> the recursive techniques that are derived in both frameworks are the same
> :-)
> See @Article{ Silver82,
> author = {Silver, D. B.},
> title = {On the equivalence of {L}agrangian and {N}ewton-{E}uler
> dynamics for manipulators},
> journal = {The International Journal of Robotics Research},
> year = {1982},
> volume = {1},
> number = {2},
> pages = {60--70}
> }
>
> (Free download till the end of the month...)
>

I agree, I will change the name of the class to something more
appropriate.

> As a summary: the classes for Motion, Solver and State are still
> practically nowhere; Chain is already okay to some extent.
>
> Ruben (and some others, including myself) would give priority to a damped
> least-squares inverse kinematics (or dynamics, if you include the inertia
> as weighting matrix), for a generic serial and tree structure, including a
> "CLIK" solver (Closed-Loop Inverse Kinematics). See e.g.
> .
>

In addition to this, I feel that handling joint limits while computing inverse
kinematics would also be very useful.

> On Wed, Jul 23, 2008 at 11:52 PM, Herman Bruyninckx
> <Herman [dot] Bruyninckx [..] ...> wrote:
>>
>> Maybe I looked at the wrong place, but in your code I don't see where you
>> adapt the inertia during an iteration over the joints... The inertia that
>> joint i feels must be a function of the inertias of all more distal
>> joints...
>>
>
> In this method we solve for the constraint forces and the joint torques.
> The effect of the inertias of the distal joints is reflected in the
> constraint forces. The backward recursion (solving for the forces) starts
> from the last link and goes up the chain.

And where are these inertial "constraint" forces calculated...?

>> A side note: "Newton_Euler" or "Lagrange" is not useful information, since
>> the recursive techniques that are derived in both frameworks are the same
>> :-)
>> See @Article{ Silver82,
>> author = {Silver, D. B.},
>> title = {On the equivalence of {L}agrangian and {N}ewton-{E}uler
>> dynamics for manipulators},
>> journal = {The International Journal of Robotics Research},
>> year = {1982},
>> volume = {1},
>> number = {2},
>> pages = {60--70}
>> }
>>
>> (Free download till the end of the month...)
>>
>
> I agree, I will change the name of the class to something more
> appropriate.
>
>> As a summary: the classes for Motion, Solver and State are still
>> practically nowhere; Chain is already okay to some extent.
>>
>> Ruben (and some others, including myself) would give priority to a damped
>> least-squares inverse kinematics (or dynamics, if you include the inertia
>> as weighting matrix), for a generic serial and tree structure, including a
>> "CLIK" solver (Closed-Loop Inverse Kinematics). See e.g.
>> .
>>
>
> In addition to this, I feel that handling joint limits while computing inverse
> kinematics would also be very useful.
>
Absolutely! Personally, I currently prefer a "potential field" approach, in
which a virtual repelling force is added to joints that come close to their
limits. (Or, an attractive force that tries to keep them close to their
optimal configuration.) But I would very much welcome a more in-depth
discussion about all sorts of alternatives!
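
A toy sketch of such a repelling "potential field" torque. The quadratic shape, the margin and the gain are illustrative choices for discussion, not a reference implementation:

```cpp
#include <cassert>

// Virtual torque that repels a joint from its limits: zero well inside
// the limits, growing quadratically once the joint enters a margin zone
// near either bound.
double repellingTorque(double q, double q_min, double q_max,
                       double margin, double gain) {
    if (q < q_min + margin) {
        double d = (q_min + margin) - q;   // penetration into lower margin
        return gain * d * d;               // push upward (positive torque)
    }
    if (q > q_max - margin) {
        double d = q - (q_max - margin);   // penetration into upper margin
        return -gain * d * d;              // push downward (negative torque)
    }
    return 0.0;                            // well inside limits: no effect
}
```

In a CLIK-style loop this torque would simply be added to the commanded joint torque (or mapped to a velocity bias) at each cycle.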

There are two loops in the function InverseDynamics in the class
ChainIdSolver_NE.
The first loop determines the linear and angular velocities and
accelerations for each link (using q, q_dot, q_dotdot).

Once these kinematic properties are known for each individual link, the
second loop calculates the joint torques and forces that will generate the
accelerations calculated in the previous loop. Each link can now be treated
individually. The forces and torques for link i are those applied at joint i
and the negative of the forces and torques at joint i+1 (Newton's third
law).
The forces and torques not along the joint axes are the constraint forces.
I am calling the internal or reaction forces at the joints the "constraint"
forces because they are enforcing the constraint that neighboring links will
remain connected through the joint.

The equations of motion are solved in the body coordinates -- gravity,
forces and torques are expressed in the body coordinate frame and the
inertia matrices are constant.
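
The two-loop structure described above can be illustrated with a deliberately simplified example: a chain of point masses on a single prismatic axis, so every vector quantity collapses to a scalar. This sketches the recursion pattern only, not the actual ChainIdSolver_NE code:

```cpp
#include <cstddef>
#include <cassert>
#include <vector>

// Loop 1 (outward): propagate accelerations from base to tip.
// Loop 2 (inward): accumulate forces from the tip back to the base
// (Newton's third law), which is exactly where the distal inertias enter.
std::vector<double> inverseDynamics1D(const std::vector<double>& mass,
                                      const std::vector<double>& qdd) {
    const std::size_t n = mass.size();
    std::vector<double> acc(n), f(n);

    // forward recursion: acceleration of link i is the sum of all joint
    // accelerations up to and including joint i
    double a = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        a += qdd[i];
        acc[i] = a;
    }

    // backward recursion: force at joint i supports link i plus everything
    // more distal
    double f_distal = 0.0;
    for (std::size_t i = n; i-- > 0; ) {
        f[i] = mass[i] * acc[i] + f_distal;
        f_distal = f[i];
    }
    return f;
}
```

With two unit masses and only the first joint accelerating, the base joint carries the inertia of both links while the distal joint carries only its own, which is the point of the inward loop.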

Advait

On Thu, Jul 31, 2008 at 7:56 AM, Herman Bruyninckx
<Herman [dot] Bruyninckx [..] ...> wrote:
> On Fri, 25 Jul 2008, Advait Jain wrote:
>
>> On Wed, Jul 23, 2008 at 11:52 PM, Herman Bruyninckx
>> <Herman [dot] Bruyninckx [..] ...> wrote:
>>>
>>> Maybe I looked at the wrong place, but in your code I don't see where you
>>> adapt the inertia during an iteration over the joints... The inertia that
>>> joint i feels must be a function of the inertias of all more distal
>>> joints...
>>>
>>
>> In this method we solve for the constraint forces and the joint torques.
>> The effect of the inertias of the distal joints is reflected in the
>> constraint forces. The backward recursion (solving for the forces) starts
>> from the last link and goes up the chain.
>
> And where are these inertial "constraint" forces calculated...?
>
>>> A side note: "Newton_Euler" or "Lagrange" is not useful information,
>>> since
>>> the recursive techniques that are derived in both frameworks are the same
>>> :-)
>>> See @Article{ Silver82,
>>> author = {Silver, D. B.},
>>> title = {On the equivalence of {L}agrangian and
>>> {N}ewton-{E}uler
>>> dynamics for manipulators},
>>> journal = {The International Journal of Robotics Research},
>>> year = {1982},
>>> volume = {1},
>>> number = {2},
>>> pages = {60--70}
>>> }
>>>
>>> (Free download till the end of the month...)
>>>
>>
>> I agree, I will change the name of the class to something more
>> appropriate.
>>
>>> As a summary: the classes for Motion, Solver and State are still
>>> practically nowhere; Chain is already okay to some extent.
>>>
>>> Ruben (and some others, including myself) would give priority to a damped
>>> least-squares inverse kinematics (or dynamics, if you include the inertia
>>> as weighting matrix), for a generic serial and tree structure, including
>>> a
>>> "CLIK" solver (Closed-Loop Inverse Kinematics). See e.g.
>>> .
>>>
>>
>> In addition to this, I feel that handling joint limits while computing
>> inverse
>> kinematics would also be very useful.
>>
> Absolutely! Personally, I currently prefer a "potential field" approach, in
> which a virtual repelling force is added to joints that come close to their
> limits. (Or, an attractive force that tries to keep them close to their
> optimal configuration.) But I would very much welcome a more in-depth
> discussion about all sorts of alternatives!
>
> Herman
>

I now understand your code! :-) I didn't _check_ it, but at least I know what
is done where...

In order to make the code checkable, it should get more documentation,
such as references to where the algorithm was published, and preferably
even down to the level of equation numbers in those references.

Thanks again!

Ruben (KDL maintainer) will be on vacation till August 18th, so nothing
concrete will happen before then.

Best regards,

Herman
> There are two loops in the function InverseDynamics in the class
> ChainIdSolver_NE.
> The first loop determines the linear and angular velocities and
> accelerations for each
> link (using q, q_dot, q_dotdot).
>
> Once these kinematic properties are known for each individual link, the second
> loop calculates the joint torques and forces that will generate the
> accelerations
> calculated in the previous loop. Each link can now be treated individually.
> The forces and torques for link i are those applied at joint i and the
> negative of the forces
> and torques at joint i+1 (Newton's third law).
> The forces and torques not along the joint axes are the constraint forces.
> I am calling the internal or reaction forces at the joints the
> "constraint" forces
> because they are enforcing the constraints that neighboring links will remain
> connected through the joint.
>
> The equations of motion are solved in the body coordinates -- gravity,
> forces and
> torques are expressed in the body coordinate frame and the inertia matrices are
> constant.
>
>
> Advait
>
>
>
> On Thu, Jul 31, 2008 at 7:56 AM, Herman Bruyninckx
> <Herman [dot] Bruyninckx [..] ...> wrote:
>> On Fri, 25 Jul 2008, Advait Jain wrote:
>>
>>> On Wed, Jul 23, 2008 at 11:52 PM, Herman Bruyninckx
>>> <Herman [dot] Bruyninckx [..] ...> wrote:
>>>>
>>>> Maybe I looked at the wrong place, but in your code I don't see where you
>>>> adapt the inertia during an iteration over the joints... The inertia that
>>>> joint i feels must be a function of the inertias of all more distal
>>>> joints...
>>>>
>>>
>>> In this method we solve for the constraint forces and the joint
>>> torques. The effect
>>> of the inertias of the distal joints is reflected in the constraint
>>> forces. The backward
>>> recursion (solving for the forces) starts from the last link and goes
>>> up the chain.
>>
>> And where are these inertial "constraint" forces calculated...?
>>
>>>> A side note: "Newton_Euler" or "Lagrange" is not useful information,
>>>> since
>>>> the recursive techniques that are derived in both frameworks are the same
>>>> :-)
>>>> See @Article{ Silver82,
>>>> author = {Silver, W. M.},
>>>> title = {On the equivalence of {L}agrangian and
>>>> {N}ewton-{E}uler
>>>> dynamics for manipulators},
>>>> journal = {The International Journal of Robotics Research},
>>>> year = {1982},
>>>> volume = {1},
>>>> number = {2},
>>>> pages = {60--70}
>>>> }
>>>>
>>>> (Free download till the end of the month...)
>>>>
>>>
>>> I agree, I will change the name of the class to something more
>>> appropriate.
>>>
>>>> As a summary: the classes for Motion, Solver and State are still
>>>> practically nowhere; Chain is already okay to some extent.
>>>>
>>>> Ruben (and some others, including myself) would give priority to a damped
>>>> least-squares inverse kinematics (or dynamics, if you include the inertia
>>>> as weighting matrix), for a generic serial and tree structure, including
>>>> a
>>>> "CLIK" solver (Closed-Loop Inverse Kinematics). See e.g.
>>>> .
>>>>
>>>
>>> In addition to this, I feel that handling joint limits while computing
>>> inverse
>>> kinematics would also be very useful.
>>>
>> Absolutely! Personally, I currently prefer a "potential field" approach, in
>> which a virtual repelling force is added to joints that come close to their
>> limits. (Or, an attractive force that tries to keep them close to their
>> optimal configuration.) But I would welcome very much a more in-depth
>> discussion about all sorts of alternatives!
>>
>> Herman
>>
>
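The two-loop structure Advait describes above (an outward pass for velocities and accelerations, an inward pass for forces) can be reduced to a minimal sketch. The function below is purely illustrative, not the actual ChainIdSolver_NE code: it treats the simplest possible case, a serial chain of point masses on prismatic joints all sliding along one line, but it shows exactly where the distal inertias enter the backward recursion:

```cpp
#include <cassert>
#include <vector>

// Schematic two-pass inverse dynamics for a 1-D chain of point masses
// connected by collinear prismatic joints.  Pass 1 propagates
// accelerations from base to tip; pass 2 propagates forces from tip to
// base, so each joint carries its own link's inertial force plus the
// reaction of everything more distal in the chain.
std::vector<double> inverseDynamics1D(const std::vector<double>& mass,
                                      const std::vector<double>& qdd)
{
    const std::size_t n = mass.size();
    std::vector<double> a(n, 0.0), f(n, 0.0);

    // Outward recursion: link i accelerates with the sum of all joint
    // accelerations between it and the base.
    for (std::size_t i = 0; i < n; ++i)
        a[i] = (i == 0 ? 0.0 : a[i - 1]) + qdd[i];

    // Inward recursion: joint i must supply link i's inertial force plus
    // the force transmitted back from the more distal links.
    for (std::size_t i = n; i-- > 0;)
        f[i] = mass[i] * a[i] + (i + 1 < n ? f[i + 1] : 0.0);

    return f;  // joint forces
}
```

For masses {1, 2} and joint accelerations {1, 1}, the link accelerations are {1, 2}, the tip joint supplies 2*2 = 4, and the base joint supplies 1*1 + 4 = 5: the inertia that joint 0 "feels" includes the distal link, exactly as discussed in the exchange above.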

> On Wednesday 23 July 2008 15:41:02 Herman Bruyninckx wrote:
>> On Wed, 23 Jul 2008, Alexis Maldonado wrote:
>>> Hi *,
>>>
>>> This is a small patch to the Joint class of KDL to add storage of joint
>>> limits. We hope to use them in some new inverse kinematic solvers, and
>>> Ruben Smits (sitting next to me at a summer school) suggested this
>>> change.
>>
>> Thanks! But in my opinion, this is only a short-term solution :-) Why?
>> Because some solvers will also need limits on the joint velocities, or
>> joint accelerations, or joint torques, etc. We should not add all these to
>> the generic Joint class, since then that becomes too overloaded.
>
> I can remember a discussion about the different levels of the KDL classes in
> the sense that you can have a Position level, a Velocity level and an
> Acceleration level. Each of these levels adds level-specific properties to the
> Joint:

Indeed...

> This will result in the following for the joint:
>
> Position level:
> - get the cartesian pose of the joint (using joint position value), see
> Joint::pose in the API
> - position limits
> - stiffness

The stiffness could also be a _function_, if it is nonlinear (which it
normally is...).

> Velocity level adds:
> - get the cartesian velocity of the joint (using the joint velocity value),
> see Joint::twist in the API
> - velocity limits
> - damping
>
> Acceleration level adds:
> - get the acceleration of the joint (we do not have this yet)
> - acceleration limits
> - mass

The "mass" is then the mass of the motor shaft? Because the Body should
contain the link inertia...

> I'm not really sure where the torque values/properties should be added.

That's one of the reasons I think it might be good to take these properties
outside of the Joint class... (And into the Solver and Chain, maybe...?)
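One way to read the "levels" proposal without overloading the generic Joint class is to bundle the per-level properties into separate structs that a solver can request independently, keeping Joint itself minimal. The stiffness is stored as a function of position, since (as noted above) it is normally nonlinear. All names here are invented for illustration; this is not the KDL API:

```cpp
#include <cassert>
#include <functional>

// Hypothetical per-level property bundles, kept outside the Joint class.
// A position-level solver asks only for JointPositionProps; a dynamics
// solver would additionally request the velocity/acceleration bundles.
struct JointPositionProps {
    double min, max;                          // position limits
    std::function<double(double)> stiffness;  // possibly nonlinear, f(q)
};

struct JointVelocityProps {
    double max_vel;   // velocity limit
    double damping;
};

struct JointAccelerationProps {
    double max_acc;     // acceleration limit
    double rotor_mass;  // motor-shaft inertia; the link inertia lives in Body
};
```

A nonlinear stiffness then simply becomes, e.g., `JointPositionProps p{-1.0, 1.0, [](double q) { return 100.0 + 50.0 * q * q; }};`, and torque-related properties could live in yet another bundle owned by the Solver or Chain, as suggested above.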

> Hi *,
>
> This is a small patch to the Joint class of KDL to add storage of joint
> limits. We hope to use them in some new inverse kinematic solvers, and Ruben
> Smits (sitting next to me at a summer school) suggested this change.
>
Thanks! But in my opinion, this is only a short-term solution :-) Why?
Because some solvers will also need limits on the joint velocities, or
joint accelerations, or joint torques, etc. We should not add all these to
the generic Joint class, since then that becomes too overloaded.

My suggestion is to make different kinds of Joints, each with their
specific set of properties...
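One possible reading of "different kinds of Joints" is a minimal base class that only describes the motion freedom, with subclasses adding the properties a particular solver family needs, so plain kinematics never pays for limit storage. Purely a sketch; these class names do not exist in KDL:

```cpp
#include <cassert>

// Minimal base class: only the motion freedom, no limits or dynamics.
class Joint {
public:
    virtual ~Joint() = default;
    virtual int nrOfDof() const { return 1; }
};

// A joint kind that additionally stores position limits, for solvers
// that need to clamp to, or stay away from, those limits.
class LimitedJoint : public Joint {
public:
    LimitedJoint(double min, double max) : min_(min), max_(max) {}
    double clamp(double q) const {
        return q < min_ ? min_ : (q > max_ ? max_ : q);
    }
private:
    double min_, max_;
};
```

Further kinds (velocity-limited, torque-limited, ...) could be added the same way, so each solver only depends on the joint kind carrying the properties it actually uses.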