Main Page

This is the Main Orocos.org Wiki page.

From here you can find links to all Orocos-related Wikis.

Orocos Wiki pages are organised in 'books'. In each book you can create child pages, edit them and move them around. The Wiki itself creates an overview of the child pages of each book.

To create a new page, click 'Add Child page' below. To edit a page, click on the Edit tab of that page. You can also write a link to a yet-to-be-written page using the Example Page syntax. When that link is clicked and the page does not exist, you are offered the chance to create and write it.

Currently, the Orocos wiki pages are written in MediaWiki style. You should create your pages in this style as well.

Feel free to click on the 'Edit' tab above to see how this page was written (and to improve it!).

The master branch gets updated when new branches are merged into it by its maintainer. This can be a merge from the bugfix branches (i.e. a merge from toolchain-2.x) or a merge from a development branch.

The stable branch should always point to the latest toolchain-2.x tip. This isn't automated, so it lags behind (probably a task for a Hudson job or a git commit hook).

The rtt-2.0-... branches are no longer updated. rtt-2.0-mainline has been merged into master, which means that if you have an rtt-2.0-mainline branch, you can just do git pull origin master and it will fast-forward your tree to the master branch; alternatively, check out the local master branch.

Contributing packages

You may contribute a software package to the community. It must respect the rules set out in the Component Packages section. Packages that are general enough can be adopted by the Orocos Toolchain Gitorious project. Make sure that your package name contains only letters, numbers and underscores; a dash ('-') is not acceptable in a package name.
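The naming rule above can be sketched as a small check. This is an illustrative helper only, not part of any Orocos tool:

```python
import re

# Allowed: letters, digits and underscores; a dash is rejected.
_PACKAGE_NAME = re.compile(r"[A-Za-z0-9_]+")

def valid_package_name(name: str) -> bool:
    """Return True if 'name' follows the package naming rule."""
    return _PACKAGE_NAME.fullmatch(name) is not None

print(valid_package_name("motion_control_2"))  # True
print(valid_package_name("motion-control"))    # False
```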

Contributing patches

Small contributions should go to the mailing lists as patches. Larger features are best communicated using topic branches in a git repository cloned from the official repositories. Send pull requests to the mailing lists. These topic branches should be hosted on a publicly available git server (e.g. github, gitorious).

NB: for Orocos v1, no git branches will be merged (the official repository is SVN); use individual patches instead. v2 git branches can be merged without problems.

Making suggestions

The easiest way to make suggestions is to use the mailing list (register here). This allows discussion about what you are suggesting (which, after all, someone else may already be working on), as well as informing others of what you are interested in (or are willing to do).

Reporting bugs

Before reporting a bug, please check the Bug Tracker, the Mailing list and the Forum to see whether this is a known issue. If this is a new issue, then TBD: email the mailing lists, or enter an issue in the bug tracker.

point to a prefix and have autoproj find out things (for ROS installs)

Sharing Orocos components across use-cases

Using rock components on plain Orocos should just work [needs testing and documentation].

Using rock tools on plain Orocos

The use of orogen or typegen would be required.

as far as we know, there is no missing "core" functionality in orogen that would prevent typegen from being usable for "core" Orocos libraries like KDL. Some functionality, such as opaques, still needs to be made available to typegen (it is currently only available to orogen). This can be done through the ability to make typegen load an oroGen SL file (trivial)

allow passing -I options directly to both oroGen and typeGen

mechanism to define "side-loading" typekits that define constructors and operators separately for scripting

the core of the Rock tooling is orocos.rb. Need to test and update orocos.rb so that it can work without a model. Method: update the test suite by mocking TaskContext#model to return nil and/or getModelName to not exist. From there, test tools like oroconf and vizkit

Dataflow between ROS and Rock / plain Orocos

need data conversions: one must be able to publish a C++ type over a ROS topic and vice-versa

typegen generation for ROS messages

type specification when creating ROS streams (since the ROS topic and the orocos port might have different types)

type conversions on the data flow: use already existing constructor infrastructure to do the conversion, need to create the channel element and change the connection code

add type conversion support in oroGen (equivalent system than opaques)

Other discussed topics

make TypeInfo very thin so that we can register it once per type and never change it. Only transports / constructors / ... could then be overridden

Roadmap

Real-time logging

The goal is to provide a real-time safe, low-overhead, flexible logging system usable throughout an entire system (i.e. within both components and any user applications, like GUIs).

We chose to base this on log4cpp, one of the C++ derivatives of log4j, the respected Java logging system. With only minor customizations, log4cpp is now usable in user component code (but not in RTT code itself, see below). It provides real-time safe, hierarchical logging with multiple levels of logging (e.g. INFO vs DEBUG).
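log4cpp follows the log4j model of hierarchical, dot-separated logging categories. Python's standard logging module implements the same model, so the concept can be sketched with it (illustration of the hierarchy idea only, not the Orocos logging API):

```python
import logging

# Categories form a tree via dot-separated names; a child inherits its
# effective level from the nearest configured ancestor.
root = logging.getLogger("app")
controller = logging.getLogger("app.motion.controller")

root.setLevel(logging.INFO)

# The child category has no level of its own, so it inherits INFO from
# "app": DEBUG records are filtered, INFO records pass.
print(controller.getEffectiveLevel() == logging.INFO)  # True
print(controller.isEnabledFor(logging.DEBUG))          # False
```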

Near future

Provide a complete system example demonstrating use of the real-time logging framework in both user components and a GUI-based application, based on the v2 toolchain.

Add logging system stress tests. (I already have these for v1, but they need to be ported to v2 and submitted.)

Allow multiple appenders per category. The current limitation is simply a technical consequence of the initial approach, and should be readily changeable.

Long term plans

Replace the existing RTT::Logger functionality with the real-time logging framework. This really should not involve rewriting all the logging statements in RTT, etc.

Provide levels of DEBUG logging. Some logging systems use FINE, FINER and FINEST levels, whilst others use DEBUG plus an integer level within debug (e.g. debug-1 through debug-9, from least to most verbose). Choose one approach, and modify log4cpp to support it.
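One possible numeric mapping for the "DEBUG plus an integer" option can be sketched with Python's stdlib logging as a stand-in. The level names and values here are hypothetical; log4cpp would need its own mapping:

```python
import logging

# Hypothetical scheme: DEBUG1 (least verbose) .. DEBUG9 (most verbose),
# mapped just below the standard DEBUG level for illustration.
DEBUG_LEVELS = {i: logging.DEBUG - i for i in range(1, 10)}
for i, value in DEBUG_LEVELS.items():
    logging.addLevelName(value, "DEBUG%d" % i)

log = logging.getLogger("demo")
log.setLevel(DEBUG_LEVELS[3])  # show DEBUG1..DEBUG3, filter DEBUG4..DEBUG9

print(log.isEnabledFor(DEBUG_LEVELS[1]))  # True  (coarser debug passes)
print(log.isEnabledFor(DEBUG_LEVELS[5]))  # False (finer debug is filtered)
```

Because more-verbose levels get lower numeric values, a single threshold selects how deep into the debug range the output goes.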

Support use by scripting and state machines (possibly also Lua?). This means both being able to log, as well as being able to configure categories, appenders, etc.

Catkin-ROS build-support plan

Target versions

These changes target Toolchain >= 2.7.0 and ROS >= Hydro.

Goals

Support building in these workflows:

Autoproj managed builds (Rock-style)

depends on: manifest.xml for meta-build info.

Rock users don't use the UseOrocos.cmake macros, since their CMakeLists.txt and .pc files get generated by orogen anyway.

CMake changes or new macros

Roadmap ideas for 3.x

While the project is still in the (heavy?) turmoil of the 1.x-to-2.x transition, it might be useful to start thinking about the next version, 3.x. Below are a number of developments and policies that could eventually become 3.x; please use the project's (user and developer) mailing lists to give your opinions, using a 3.x Roadmap message tag.

Disclaimer: there is nothing official yet about any of the below-mentioned suggestions; on the contrary, they are currently just the reflections of one single person, Herman Bruyninckx. (Please, update this disclaimer if you add your own suggestions.)

General policies to be followed in this Roadmap:

the anti-Not-Invented-Here policy: whenever there exists a FOSS project that already has a solution for (part of) this roadmap, we should try to cooperate with that project, instead of putting effort into our own version.

the big critical mass projects first policy: when confronted with the situation above, it is much preferred to cooperate with (contribute to) projects that have a high critical mass (CMake, Linux, Eclipse, Qt, etc.) instead of with single-person or single-team projects, even when the latter currently have better functionality and ideas. At the same time, promising single-person projects will be stimulated to make their efforts useful in a larger critical mass project.

Orocos distribution

Much can be improved to bring Orocos closer to users, and the concept of a simple-to-install distribution is a proven best practice. However, Orocos should not try to develop its own distribution, but should rather hook on to existing, successful efforts. ROS is the obvious first choice, and orocos_toolchain_ros is the concrete initiative that has already started in this direction. However, this "only" makes "some" relevant low-level Orocos functionality available in a form that is easier to install for many robotics users; in order to allow users to profit from all Orocos functionality, the following extra steps have to be taken:

a "Hello Robot!" application, installable as a ROS stack. It could contain a simulated robot, visualised in Morse or Gazebo, and componentized in an RTT component, together with an RTT/KDL/BFL-based set of motion controllers and estimators. (Morse is currently the most promising candidate, from a component-based development point of view.)

a (Wiki) book that explains the whole setup, not just from a software point of view, but also a motivation why the presented example could be considered as a "best practice" as a robotics system. This Wiki book should not be an Orocos-only effort, but be useful for the whole community.

a similar "Hello Machine!" application, targeting not the robotics community, but the mechatronics, or machine tools community.

Contributors to this part of the Roadmap need not be RTT developers, but motivated users!

RTT

The road towards better decoupling, as started in 2.x, is designed and implemented further:

The OROMACS development at the University of Twente has already produced a core of the composite component. That concept is required for full support of the Model-Driven Engineering approach.

the connection is the data-less, event-less and command-less representation of the architecture of a system, consisting of only the identification of which components will interact with each other.

the difference between a port and an interface is that a port belongs to a component, and implements an interface; the interface in itself must become a first-class citizen of the component model.

discrete behaviour is the current state machine. Further developments in this context are probably only to be expected at the implementation and tooling front.

communication: Orocos has had, from day one, the ambition to not provide communication middleware, since there are so many other projects that do that. RTT should, however, improve its decoupling of (i) using data structures inside a component, (ii) providing them for communication in a port, and (iii) transporting them from one component's port to another component's port. Maybe this is as easy as cleanly separating the configuration files for all three aspects; maybe it's more involved than that.

the mapping on real hardware resources (computational thread, communication field bus) is separated from the definition of a component.

the process of defining data flow data structures is supported by an IDL language. This IDL has to be chosen together with other projects, and should not be an Orocos-only effort. A real IDL includes the definition of the meaning of the fields in the data, and not just their computer language representation.

the codel idea of GenoM3 is supported for the construction of continuous behaviour inside a component. The important role of the codel idea in the context of realtime systems is that one should give the component designer full control over when which computations are to be executed (instead of relying on the OS scheduler); this requires a design in which computations can be subdivided in pre-emptible pieces (codels), and in which they can be scheduled in efficient Directed Acyclic Graphs.

Contributors to this part of the Roadmap need be RTT developers!

BFL, KDL, SCL

SCL does not yet exist, but there is a high and natural need for a Systems and Control Library, next to BFL and KDL.

All three libraries share a common fundamental design property: they can all be considered as special cases of executable graphs, so common support will be developed for the flexible, configurable scheduling of all computations (codels) in complex networks (Bayesian networks, kinematic/dynamic networks, control diagrams).

Contributors to this part of the Roadmap need not be RTT developers, but domain experts that have become power users of the RTT infrastructure!

iTaSC and beyond

A usable robotics control system consists, of course, not only of RTT, BFL, KDL and/or SCL components; there is an obvious need for a task primitive: the brain that contains all the knowledge about when to use which component, with what configuration, and until what conditions are satisfied.

As a first step, the instantaneous version of a constraint-based optimization approach to task-level control will be provided. Following steps will extend the instantaneous idea towards non-instantaneous tasks. This extension must be focused on tasks that require realtime performance, since non-realtime solutions are provided by other projects, such as ROS.

Contributors to this part of the Roadmap need not be RTT developers, but domain experts that also happen to be average users of the RTT infrastructure! They will open up the functionalities of Orocos to the normal end-user.

Tooling

More and improved tools have been a major feature of the 2.x evolution. The major tooling effort for 3.x will be to bring the above-mentioned component model into the Eclipse eco-system.

The first efforts in this direction have started, in the context of the European project BRICS.

Contributors to this part of the Roadmap need not be RTT developers, but programmers familiar with advanced Eclipse features, such as Ecore models, EMF, etc.

European Robotics Forum 2011 Workshop on the Orocos Toolchain

At the European Robotics Forum 2011, Intermodalics, Locomotec and K.U.Leuven are organizing a two-part seminar appealing to both industry and research institutes, titled:

In this presentation, Peter Soetens and Ruben Smits introduce the audience to today's Open Source robotics eco-system. What are the strong and weak points of existing software? Which packages work seamlessly together, and on which operating systems (Windows, Linux, VxWorks, ...)? We will prove our statements with practical examples from both academic and industrial use cases. This presentation is the result of the presenters' long-standing experience with open source technologies in robotics applications, and will offer the audience leads and insights to further explore this realm.

Exploring the Orocos Toolchain

In this hands-on session, the participants are invited to bring their own laptop with Orocos and (optionally) ROS installed. We will support Linux, Mac OS-X and Windows users, and will provide instructions on how they can prepare to participate. A real and a simulated YouBot will be used.

YouBot Demo Setup

The workshop will start by making you familiar with the Orocos Toolchain, which does not require the YouBot. The hands-on will then continue on a robot in simulation and on the real hardware. We will use the ROS communication protocol to send instructions to the simulator (Gazebo) or the YouBot. Installing Gazebo is not required, since this simulation will run on a dedicated machine. Documentation on the workshop application and the assignment can be found at https://github.com/bellenss/euRobotics_orocos_ws/wiki.

Registration

You first need to register for attending the euRobotics Forum. Registration for the workshop is mandatory, but free of charge. For the hands-on session, we will limit the number of participants to 20. The workshop is guided by 6 experienced Orocos users. Please register your participation by sending an email to info at intermodalics dot eu. We will confirm your participation shortly. Later on, you will receive a second email with more details about how to prepare. You should receive this second, detailed email in the week of March 20, 2011.

euRobotics Forum Linux Setup

Toolchain Installation

The installation instructions depend on whether you have ROS installed.

NOTE: ROS is required to participate in the YouBot demo.

With ROS on Ubuntu Lucid/Maverick

Install Diamondback ROS using Debian packages for Ubuntu Lucid (10.04) and Maverick (10.10) or the ROS install scripts, in case you don't run Ubuntu.

With ROS on Debian, Fedora or other systems

We did not succeed in releasing the Diamondback 0.3.0 binary packages of the Orocos Toolchain for your platform. This means that you need to build this 'stack' yourself with 'rosmake' after you have installed ROS (see http://www.ros.org/wiki/diamondback/Installation). This 'rosmake' step may take between 30 minutes and an hour, depending on your laptop.

Without ROS

Workshop Sources

euRobotics Forum Mac OS-X Setup

Toolchain Installation

Due to a dynamic library issue in the current 2.3 release series, Mac OS-X cannot be supported during the workshop. We will make available a bootable USB stick containing a pre-installed Ubuntu environment with all necessary packages.

euRobotics Forum Windows Setup

Toolchain Installation

Windows users can participate in the first part of the hands-on, where Orocos components are created and used. Be aware that installing the Orocos Toolchain on Win32 platforms may take a full day if you are not familiar with CMake, compiling Boost, or any other dependency of the RTT and OCL.

You need to follow the instructions for RTT/OCL v2.3.1 or newer, which you can download from the Orocos Toolchain page. We recommend building the Release configuration.

In case you have no time nor the experience to set this up, we provide bootable USB sticks that contain Ubuntu Linux with all workshop files.

Workshop Sources

An additional package is being prepared that will contain the workshop files. See euRobotics Workshop Sources for downloading the sources.

Windows users might also install the Kst program, a KDE plotting program that also runs on Linux. We provide a .kst file for plotting the workshop data. See the Kst download page.

Testing Your Setup

Once you have completed building and installing RTT and OCL, you can launch a cygwin or cmd.exe prompt and run the orocreate-pkg script (found in your c:\orocos\bin directory) to create a new package. Make sure that your PATH variable is properly extended with

set PATH=%PATH%;c:\orocos\bin;c:\orocos\lib;c:\orocos\lib\orocos\win32;c:\orocos\lib\orocos\win32\plugins

(replace c:\orocos with the actual installation path, which might also be c:\Program Files\orocos)

Repeat the classical CMake steps with this package: generate the solution file, then build and install it. Then start up the deployer with the deployer-win32.exe program and type 'ls'. It should start and show meaningful information. If you see strange characters in the output, you need to turn off the colors with the '.nocolors' command at the Deployer's prompt.

euRobotics Workshop Material

The euRobotics Forum workshop on Orocos has been a great success. About 30 people attended and participated in the hands-on workshop. The Real-Time & Open Source in Robotics track drew more than 60 people. Both tracks were overbooked.

ROS users

You have built RTT from the orocos_toolchain_ros package. Make sure that you source the /opt/ros/diamondback/setup.bash script and that the unpacked exercises are under a directory of the ROS_PACKAGE_PATH:

Registration

You first need to register for attending the euRobotics Forum. Registration for the workshop is mandatory, but free of charge. For the hands-on sessions (hands-on 1 and hands-on 2), we will limit the number of participants to 20. The workshops are guided by different experienced Orocos users. Please register your participation by sending an email to info at intermodalics dot eu, indicating which workshops you want to attend. We will confirm your participation shortly. Later on, you will receive a second email with more details about how to prepare. You should receive this second, detailed email in the week of February 27, 2012.

Motivation and objective

The workshop consists of three rather independent parts. It is advised but not required to follow the preceding session(s) when attending session two or three.

The first session is a presentation session; it introduces the basic concepts of Orocos application programming, followed by rFSM state charts and the iTaSC framework.

The second session is a hands-on session that aims at making the participants familiar with rFSM state charts, a powerful yet easy-to-use tool for robotic coordination and supervision tasks.

The third session is also a hands-on session; it aims at introducing the concepts of constraint-based motion specification using the iTaSC framework. This framework and its software implementation were developed at KU Leuven during the past years. Its key advantages are the composability of (partial) constraints and the reusability of the constraint specification. The software is an open-source project, which has recently reached its 2.0 version.

Approach

Presentation session, giving a high-level overview of rFSM and iTaSC by introducing the key concepts.

Hands-on session: guided exercise where the participants will have to create an application with interacting state machines, which can be used, for example, to coordinate the behavior of the iTaSC application of the following session.

Hands-on session: guided exercise where the participants will have to create an application consisting of multiple tasks on a robot in simulation, e.g. drawing a figure on a table while avoiding a moving obstacle with a KUKA YouBot.

Feedback form

Participant feedback is greatly appreciated. Please fill in the feedback form. Some browsers/PDF viewers do not support in-browser usage of the form; to avoid problems, please download the form first.

The geometric relations semantics software (C++) implements the geometric relations semantics theory, thereby offering support for semantic checks of your rigid body relation calculations. This will avoid commonly made errors, and hence reduce application and, especially, system integration development time considerably. The proposed software is, to our knowledge, the first to offer a semantic interface for geometric operation software libraries.

The screenshot below shows the output of the semantic checks of the (wrong) composition of two positions and two orientations.

Output of the semantic checks of the (wrong) composition of two positions and two orientations

The goal of the software is to provide semantic checking for calculations with geometric relations between rigid bodies, on top of existing geometric libraries that only work on specific coordinate representations. Since there are already a lot of libraries with good support for geometric calculations on specific coordinate representations (the Orocos Kinematics and Dynamics library, the ROS geometry library, boost, ...), we do not want to design yet another library, but rather extend these existing geometric libraries with semantic support. The effort to extend an existing geometric library with semantic support is very limited: it boils down to the implementation of about six function template specializations.

What is it?

This wiki contains a summary of the article accepted as a tutorial for the IEEE Robotics and Automation Magazine on 4 June 2012.

Rigid bodies are essential primitives in the modelling of robotic devices, tasks and perception, starting with the basic geometric relations such as relative position, orientation, pose, translational velocity, rotational velocity, and twist. This wiki elaborates on the background and the software for the semantics underlying rigid-body relationships. It is based on research of the KU Leuven robotics group, in this case mainly conducted by Tinne De Laet, that explains the semantics of all coordinate-invariant properties and operations and, more importantly, documents all the choices that are made in coordinate representations of these geometric relations. This resulted in a set of concrete suggestions for standardizing terminology and notation, and in software with a fully unambiguous interface, including automatic checks for the semantic correctness of all geometric operations on rigid-body coordinate representations.

Logic errors in geometric relation calculations: Many logic errors can occur during geometric relation calculations. For instance, the inverse of a pose is formed differently from the inverse of a translational velocity (there is no need to understand the details; the point is the difference in syntax). When using the semantic representation proposed in the paper, the semantics of the inverse geometric relation can be automatically derived from the forward geometric relation, preventing such logic errors. A second example emerges when composing the relations involving three rigid bodies: in order to get the geometric relation of one body with respect to a second body, one composes the geometric relation between the first body and a third body with the geometric relation between that third body and the second body (and not, for instance, the relation in the opposite direction). Such a logic constraint can be checked easily by including, for instance, the body and reference body in the semantic representation of the geometric relations.
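The bookkeeping behind such checks can be sketched in a few lines. This is a toy illustration; the class and method names are hypothetical and not the geometric_semantics C++ interface:

```python
class Pose:
    """Pose of 'target' relative to 'reference'; coordinates are omitted,
    only the semantics needed for the checks are kept."""

    def __init__(self, target, reference):
        self.target = target
        self.reference = reference

    def inverse(self):
        # The semantics of the inverse follow automatically from the
        # forward relation: target and reference simply swap.
        return Pose(self.reference, self.target)

    def compose(self, other):
        # a-w.r.t.-b composed with b-w.r.t.-c yields a-w.r.t.-c;
        # any other pairing is a semantic error.
        if self.reference != other.target:
            raise ValueError("semantic mismatch: %s|%s cannot be composed "
                             "with %s|%s" % (self.target, self.reference,
                                             other.target, other.reference))
        return Pose(self.target, other.reference)

a_b = Pose("a", "b")
b_c = Pose("b", "c")
a_c = a_b.compose(b_c)         # fine: pose of a relative to c
# a_b.compose(Pose("c", "b"))  # would raise ValueError
```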

Composition of twists with different velocity reference point: Composing twists requires a common velocity reference point (i.e. the twists have to express the translational velocity of the same point on the body). By including the velocity reference point of the twist in the semantic representation, this constraint can be checked explicitly.

Composition of geometric relations expressed in different coordinate frames: Composing geometric relations using coordinate representations like position vectors, translational and rotational velocity vectors, and 6D vector twists, requires that the coordinates are expressed in the same coordinate frame. By including the coordinate frame in the coordinate semantic representation of the geometric relations, this constraint can be checked explicitly.

Composition of poses and orientation coordinate representations in wrong order: The rotation matrix and homogeneous transformation matrix coordinate representations can be composed using simple multiplication. Since matrix multiplication is, however, not commutative, a common error is to use the wrong multiplication order in the composition. The correct multiplication order can be directly derived when including the bodies, frames, and points in the coordinate semantic representation of the geometric relations.
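The order sensitivity is easy to demonstrate with two exact 90-degree rotation matrices (a plain-Python illustration, not tied to any particular library):

```python
def matmul3(A, B):
    # 3x3 matrix product, written out for clarity.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Rz = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # 90 degrees about Z
Rx = [[1, 0, 0], [0, 0, -1], [0, 1, 0]]   # 90 degrees about X

# Rotations about different axes do not commute, so the two products differ.
print(matmul3(Rz, Rx) == matmul3(Rx, Rz))  # False
```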

Integration of twists when velocity reference point and coordinate frame do not belong to same frame: A twist can only be integrated when it expresses the translational velocity of the origin of the coordinate frame the twist is expressed in. When including the velocity reference point and the coordinate frame in the coordinate semantic representation of the twist, this constraint can be explicitly checked.
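The two twist constraints above can also be sketched as explicit checks. The field and method names are illustrative only, not the real interface:

```python
class Twist:
    """Toy semantic twist: only the fields needed for the two checks
    below are kept; coordinates are omitted."""

    def __init__(self, body, reference, velocity_ref_point, coord_frame):
        self.body = body
        self.reference = reference
        self.velocity_ref_point = velocity_ref_point
        self.coord_frame = coord_frame

    def can_compose_with(self, other):
        # Composition needs a common velocity reference point.
        return self.velocity_ref_point == other.velocity_ref_point

    def can_integrate(self):
        # Integration needs the velocity reference point to be the
        # origin of the frame the coordinates are expressed in.
        return self.velocity_ref_point == ("origin", self.coord_frame)

t = Twist("a", "b", ("origin", "f1"), "f1")
print(t.can_integrate())                                  # True
print(t.can_compose_with(Twist("b", "c", ("p",), "f1")))  # False
```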

Background


Background and terminology

A rigid body is an idealization of a solid body of finite or infinite size in which deformation is neglected. We often abbreviate "rigid body" to "body", and denote it by a symbol. A body in three-dimensional space has six degrees of freedom: three in translation and three in rotation. The subspace of all body motions that involve only changes in orientation is often denoted by SO(3) (the Special Orthogonal group in three-dimensional space); it forms a group under the operation of composition of relative motion. The space of all body motions, including translations, is denoted by SE(3) (the Special Euclidean group in three-dimensional space).

A general six-dimensional displacement between two bodies is called a (relative) pose: it contains both the position and the orientation. Note that the position, orientation, and pose of a body are not absolute concepts, since they imply a second body with respect to which they are defined. Hence, only the relative position, orientation, and pose between two bodies are relevant geometric relations.

A general six-dimensional velocity between two bodies is called a (relative) twist: it contains both the rotational and the translational velocity. Similar to the position, orientation, and pose, the translational velocity, rotational velocity, and twist of a body are not absolute concepts, since they imply a second body with respect to which they are defined. Hence, only the relative translational velocity, rotational velocity, and twist between two bodies are relevant geometric relations.

When doing actual calculations with the geometric relations between rigid bodies, one has to use the coordinate representation of the geometric relations, and therefore has to choose a coordinate frame in which the coordinates are expressed in order to obtain numerical values for the geometric relations.

Semantics

Geometric primitives

The geometric relations between bodies are described using a set of geometric primitives:

A (spatial) point is the primitive used to represent the position of a body. Points have neither volume, area, length, nor any other higher-dimensional analogue.

A vector is the geometric primitive that connects one point to another. It has a magnitude (the straight-line distance between the two points) and a direction (from the first point to the second). To express the magnitude of a vector, a (length) scale must be chosen.

An orientation frame represents an orientation, by means of three orthonormal vectors indicating the frame's X-axis, Y-axis, and Z-axis.

A (displacement) frame represents the position and orientation of a body, by means of an orientation frame and a point (which is the orientation frame's origin).

Each of these geometric primitives can be fixed to a body, which means that the geometric primitive coincides with the body not only instantaneously, but also over time. The figure below presents the geometric primitives body, point, vector, orientation frame, and frame graphically.

Geometric Primitives

Geometric relations

The table below summarizes the semantics for the following geometric relations between rigid bodies: position, orientation, pose, translational velocity, rotational velocity, and twist.

Geometric relations

Force, Torque, and Wrench

Screw theory, the algebra and calculus of pairs of vectors that arise in the kinematics and dynamics of rigid bodies, shows the duality between wrenches, consisting of the torque and force vectors, and twists, consisting of translational and rotational velocity vectors. The parallelism between translational, rotational velocity, and twist on the one hand, and torque, force, and wrench on the other hand, is directly reflected in the semantic representation (see the table below) and the coordinate representations.

The software implements the geometric relations semantics, thereby offering support for semantic checks of your rigid body relations. This will avoid commonly made errors, and hence reduce application (and, especially, system integration) development time considerably. The proposed software is, to our knowledge, the first to offer a semantic interface for geometric operation software libraries.

The design idea

The goal of the geometric_relations_semantics library is to provide semantic checking for calculations with geometric relations between rigid bodies on top of existing geometric libraries, which work only on specific coordinate representations. Since there are already many libraries with good support for geometric calculations on specific coordinate representations (the Orocos Kinematics and Dynamics Library, the ROS geometry library, Boost, ...), we do not want to design yet another library, but rather extend these existing geometric libraries with semantic support. The effort to extend an existing geometric library with semantic support is very limited: it boils down to implementing about six function template specializations.

For the semantic checking, we created the (templated) geometric_semantics core library, providing all the necessary semantic support for geometric relations (relative positions, orientations, poses, translational velocities, rotational velocities, twists, forces, torques, and wrenches) and the operations on these geometric relations (composition, integration, inversion, ...).

If you want to perform actual geometric relation calculations, you will need particular coordinate representations (for instance a homogeneous transformation matrix for a pose) and a geometric library offering support for calculations on these coordinate representations (for instance multiplication of homogeneous transformation matrices). To this end, you can build your own library depending on the geometric_semantics core library in which you implement a limited number of functions, which make the connection between semantic operations (for instance composition) and actual coordinate representation calculations (for instance multiplication of homogeneous transformation matrices). We already provide support for two geometric libraries: the Orocos Kinematics and Dynamics library and the ROS geometry library, in the geometric_semantics_kdl and geometric_semantics_tf libraries, respectively.
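The connection between semantics and coordinates can be sketched, independently of any particular geometry library, as a thin templated wrapper that carries the semantic information next to the coordinates and verifies it before delegating the calculation. This is a minimal stdlib-only sketch: the class and member names are illustrative and differ from the real geometric_semantics API.

```cpp
#include <stdexcept>
#include <string>

// Illustrative sketch only: a pose of {point, body} relative to
// {refPoint, refBody}, with coordinates expressed in coordinateFrame.
template <typename Coordinates>
struct CheckedPose {
    std::string point, body;
    std::string refPoint, refBody;
    std::string coordinateFrame;
    Coordinates coordinates;
};

// Compose X_a_b with X_b_c into X_a_c; the semantic check verifies that the
// "middle" point/body match and that both poses use the same coordinate frame.
template <typename Coordinates>
CheckedPose<Coordinates> compose(const CheckedPose<Coordinates>& ab,
                                 const CheckedPose<Coordinates>& bc) {
    if (ab.point != bc.refPoint || ab.body != bc.refBody ||
        ab.coordinateFrame != bc.coordinateFrame)
        throw std::logic_error("semantically invalid composition");
    // The actual coordinate calculation is delegated to the geometry type:
    // for homogeneous transformation matrices this would be a matrix product.
    return {bc.point, bc.body, ab.refPoint, ab.refBody,
            ab.coordinateFrame, ab.coordinates * bc.coordinates};
}
```

With Coordinates instantiated as a real transformation type, compose() performs the coordinate calculation only after the semantic check passes; swapping the operands raises an error instead of silently producing a wrong pose.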

The design

For every geometric relation (position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench) the geometric_semantics library contains four classes. Here we will explain the design with the position geometric relation, but all other geometric relations have a similar design. For the position geometric relation there are four classes:

PositionSemantics: This class contains the semantics of the (coordinate-free) Position geometric relation. For instance in this case it contains the information on the point, reference point, body, and reference body.

PositionCoordinatesSemantics: This class contains a PositionSemantics object of the geometric relation at hand, plus the extra semantic information needed for the position coordinates, i.e. the coordinate frame in which the coordinates are expressed.

PositionCoordinates: This templated class contains the actual coordinate representation of the geometric relation, for instance a position vector for the position geometric relation. The template is the actual geometry object (of an external library) you will use as a coordinate representation, for instance a KDL::Vector.

Position: This templated class is a composition of a PositionCoordinatesSemantics object and a PositionCoordinates object. If you want both semantic support and actual geometric calculations, this is the level you will work at.

Again, the template is the actual geometry (of an external library) you will use as a coordinate representation, for instance a KDL::Vector.

The design described above is illustrated in the figure below.

Position geometric relation design

Position: to do both semantic checking and the actual geometric calculations.

Pose, Twist, and Wrench

We need to give some extra information on the pose, twist, and wrench geometric relations, since they can be represented either as a composition of two other geometric relations (Pose = Position + Orientation, Twist = TranslationalVelocity + RotationalVelocity, Wrench = Force + Torque) or as a new geometric relation in their own right. For example, we may want to use a homogeneous transformation matrix as the coordinate representation of a pose, and in this case we would also want, for efficiency reasons, to do the calculations directly on the homogeneous transformation matrices. In another case, we may want to represent the pose as the composition of a position (with, for instance, a position vector as coordinate representation) and an orientation (with, for instance, Euler angles as coordinate representation). The software allows both designs, as illustrated in the two figures below.

Pose geometric relation design as a basic geometric relation

Pose geometric relation design as a composition of a Position and an Orientation geometric relation

Quick start

Overview

The framework follows an OROCOS-ROS approach and consists of one stack:

geometric_relations_semantics.

This stack consists of the following packages:

geometric_semantics: geometric_semantics is the core of the geometric_relations_semantics stack and provides C++ code for the semantic support of geometric relations between rigid bodies (relative position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench). If you want to use semantic checking for the geometric relation operations between rigid bodies in your application, check the geometric_semantics_examples package. If you want to create support for your own geometry types on top of the geometric_semantics package, the geometric_semantics_kdl package provides a good starting point.

geometric_semantics_examples: geometric_semantics_examples groups some examples showing how the geometric_semantics can be used to provide semantic checking for the geometric relations between rigid bodies in your application.

geometric_semantics_orocos_typekit: geometric_semantics_orocos_typekit provides Orocos typekit support for the geometric_semantics types, such that the geometric semantics types are visible within Orocos (in the TaskBrowser component, in Orocos scripts, in reporting, in reading from and writing to files (for instance for properties), ...).

geometric_semantics_msgs: geometric_semantics_msgs provides ROS messages matching the C++ types defined in the geometric_semantics package, in order to retain semantic support during message-based communication.

geometric_semantics_msgs_conversions: geometric_semantics_msgs_conversions provides conversions between geometric_semantics_msgs and the C++ geometric_semantics types defined in the geometric_semantics package.

geometric_semantics_kdl: geometric_semantics_kdl provides support for orocos_kdl types on top of the geometric_semantics package (for instance KDL::Frame to represent the relative pose of two rigid bodies). If you want to create support for your own geometry types on top of the geometric_semantics package, this package provides a good starting point.

geometric_semantics_tf: geometric_semantics_tf provides support for tf data types (see http://www.ros.org/wiki/tf/Overview/Data%20Types) on top of the geometric_semantics package (for instance tf::Pose to represent the relative pose of two rigid bodies).

geometric_semantics_tf_msgs: geometric_semantics_tf_msgs provides ROS messages matching the C++ types defined in the geometric_semantics_tf package, in order to retain semantic support for tf types during message-based communication.

geometric_semantics_tf_msgs_conversions: geometric_semantics_tf_msgs_conversions provides conversions between geometric_semantics_tf_msgs and the C++ geometric_semantics_tf types defined in the geometric_semantics_tf package.

Each package contains the following subdirectories:

src/: contains the source code of the components (mainly C++, or Python for the ROS msgs support).

Installation instructions

Warning: so far we only provide support for Linux-based systems. On Windows or Mac you are still on your own, but we are always interested in your experiences and in extensions of the installation instructions, quick start guide, and user guide.

User guide

Setting the build options of the core library

You can customize the behavior of the semantic checking (checking or not, and screen output or not) by changing the build options of the geometric_semantics library (see the CMakeLists.txt of the geometric_semantics package):

add_definitions(-DCHECK): when using this build flag, the semantic checking will be enabled.

add_definitions(-DOUTPUT_CORRECT): when using this build flag, you will get screen output for operations that are semantically correct.

add_definitions(-DOUTPUT_WRONG): when using this build flag, you will get screen output for operations that are semantically wrong.

Using the geometric relations semantics in your own application

Here we will explain how you can use the geometric relations semantics in your application, in particular using the Orocos Kinematics and Dynamics library as a geometry library, supplemented with the semantic support.

Preparing your own application using the ROS-build system

Create a new ROS package (in this case with name myApplication), with a dependency on the geometric_semantics_kdl:

roscreate-pkg myApplication geometric_semantics_kdl

This will automatically create a directory with name myApplication and a basic build infrastructure (see the roscreate-pkg documentation).

Add the newly created directory to your ROS_PACKAGE_PATH environment variable:

cd myApplication
export ROS_PACKAGE_PATH=$PWD:$ROS_PACKAGE_PATH

Writing your own application

Go to the application directory:

roscd myApplication

Create a main C++ file

touch myApplication.cpp

Edit the C++ file with your favorite editor.

Include the necessary headers. For instance:

#include <Pose/Pose.h>
#include <Pose/PoseCoordinatesKDL.h>

It can be convenient to use the geometric_semantics namespace and, for instance, that of your geometry library (in this case KDL):

using namespace geometric_semantics;
using namespace KDL;

In your main you should create the necessary geometric relations. For instance for a pose, first create the KDL coordinates:
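A minimal version of this step, using the constructor signatures that appear later on this page (the semantic arguments — points a and b, orientation frames e and f, body names, and the coordinate frame — are illustrative names, not required ones):

```cpp
// Create the KDL coordinates first: a KDL::Frame for the pose of body B2 wrt body B1
KDL::Frame coordinatesPose(Rotation::EulerZYX(M_PI/4, 0, 0), Vector(1, 2, 3));
// Then attach the semantics: point/orientation frame/body of B2,
// point/orientation frame/body of B1, and the coordinate frame
Pose<KDL::Frame> poseB2_B1("a", "e", "B2", "b", "f", "B1", "f", coordinatesPose);
```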

Now you are ready to do actual calculations using semantic checking. For instance to take the inverse:

Pose<KDL::Frame> poseB1_B2 = poseB2_B1.inverse();

Building your own application

To build your application, edit the CMakeLists.txt file created in your application directory. Add your C++ main file to be built as an executable by adding the following line:

rosbuild_add_executable(myApplication myApplication.cpp)

Now you are ready to build, so type

rosmake myApplication

and the executable will be created in the bin directory.

To run the executable do:

bin/myApplication

You will get the semantic output on your screen.

Extending your geometry library with semantic checking

Imagine you have your own geometry library with support for geometric-relation coordinate representations and for calculations with these representations. You would, however, like to have semantic support on top of this geometry library. Probably the best thing to do in this case is to mimic our support for the Orocos Kinematics and Dynamics Library. To have a look at it, do:

roscd geometric_semantics_kdl/

Template specialization

The only thing you have to do is write template specializations. For instance, to get support for KDL::Rotation, which is a coordinate representation for an Orientation geometric relation, you have to write the template specialization of OrientationCoordinates<T>, i.e. OrientationCoordinates<KDL::Rotation>.

Semantic constraints invoked by your coordinate representations

The first thing to find out is which semantic constraints are invoked by the particular coordinate representation you use. For instance, a KDL::Rotation represents a 3x3 rotation matrix and invokes the semantic constraint that the reference orientation frame is equal to the coordinate frame.

The possible semantic constraints are listed in the *Coordinates.h files in the geometric_semantics core library. For instance, for OrientationCoordinates we find there an enumeration of the possible semantic constraints imposed by Orientation coordinate representations:

/**
 * \brief Constraints imposed by the orientation coordinate representation to the semantics
 */
enum Constraints {
    noConstraints = 0x00,
    coordinateFrame_equals_referenceOrientationFrame = 0x01 // the orientation frame on the reference body has to be equal to the coordinate frame
};

You should specify the constraint when writing the template specialization of the OrientationCoordinates<KDL::Rotation>:

Specializing other functions to do actual coordinate calculations

The other function template specializations specify the actual coordinate calculations that have to be performed for semantic operations like inverse, changing the coordinate frame, changing the orientation frame, ... For instance, to specialize the inverse for KDL::Rotation coordinate representations:
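Purely as a hypothetical sketch (the actual function names, member names, and signatures are those found in the *Coordinates.h headers and in geometric_semantics_kdl, and may differ), such a specialization delegates the coordinate work to KDL:

```cpp
// Hypothetical sketch: delegate the inverse to KDL's own Rotation::Inverse().
// Member name "coordinates" is assumed here for illustration only.
template <>
OrientationCoordinates<KDL::Rotation>
OrientationCoordinates<KDL::Rotation>::inverse() const
{
    return OrientationCoordinates<KDL::Rotation>(coordinates.Inverse());
}
```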

Tutorials

Setting up a package and the build system for your application

This tutorial explains one possibility for setting up a build system for your application using geometric_relations_semantics. The approach we explain uses the ROS package and build infrastructure, and therefore assumes you have ROS installed and set up on your computer.

Create a new ROS package (in this case with name myApplication), with a dependency on the geometric_semantics library and, for instance, the geometric_semantics_kdl library:

roscreate-pkg myApplication geometric_semantics geometric_semantics_kdl

This tutorial assumes you have prepared a ROS package with name myApplication and that you have set your ROS_PACKAGE_PATH environment variable accordingly, as explained in this tutorial.

In this tutorial we first explain how you can create basic semantic objects (without coordinates and coordinate checking) and perform semantic operations on them. We will show how you can create any of the supported geometric relations: position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench.

Note that the file resulting from following this tutorial is attached to this wiki page for completeness.

Prepare the main file

Go to the directory of our first application using:

roscd myApplication

Create a main file (in this tutorial called myFirstApplication.cpp) in which we will put the code of our first application.

If you execute the program, you will get screen output on the semantic correctness of the compositions (if not, check the build flags of your geometric_semantics library as explained in the user guide). You can print and check the result of the composition using:

Your second application using semantic checking on geometric relations including coordinate checking

This tutorial assumes you have prepared a ROS package with name myApplication and that you have set your ROS_PACKAGE_PATH environment variable accordingly, as explained in this tutorial.

In this tutorial we first explain how you can create basic semantic objects (without coordinates but with coordinate checking) and perform semantic operations on them. We will show how you can create any of the supported geometric relations: position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench.

Note that the file resulting from following this tutorial is attached to this wiki page for completeness.

Prepare the main file

Prepare a mySecondApplication.cpp main file as explained in this tutorial.

Building your second application

To build your application, edit the CMakeLists.txt file created in your application directory. Add your C++ main file to be built as an executable by adding the following line:

rosbuild_add_executable(mySecondApplication mySecondApplication.cpp)

Now you are ready to build, so type

rosmake myApplication

and the executable will be created in the bin directory.

To run the executable do:

bin/mySecondApplication

You will get the semantic output on your screen.

Creating the geometric relations coordinates semantics

We will start with creating the geometric relation coordinates semantics objects for the relation between body C with point a and orientation frame [e], and body D with point b and orientation frame [f], all expressed in coordinate frame [r]:

If you execute the program, you will get screen output on the semantic correctness (and note: in this case also incorrectness) of the compositions (if not, check the build flags of your geometric_semantics library as explained in the user guide). You can print and check the result of the composition using:

Your third application doing actual geometric calculations on top of the semantic checking

This tutorial assumes you have prepared a ROS package with name myApplication and that you have set your ROS_PACKAGE_PATH environment variable accordingly, as explained in this tutorial.

In this tutorial we first explain how you can create full geometric relation objects (with semantics and an actual coordinate representation) and perform operations on them. We will show how you can create any of the supported geometric relations: position, orientation, pose, translational velocity, rotational velocity, twist, force, torque, and wrench. To this end we will use the coordinate representations of the Orocos Kinematics and Dynamics Library. The semantic support on top of this geometry library is already provided by the geometric_semantics_kdl package.

Note that the file resulting from following this tutorial is attached to this wiki page for completeness.

Prepare the main file

Prepare a myThirdApplication.cpp main file as explained in this tutorial.

Building your third application

To build your application, edit the CMakeLists.txt file created in your application directory. Add your C++ main file to be built as an executable by adding the following line:

rosbuild_add_executable(myThirdApplication myThirdApplication.cpp)

Now you are ready to build, so type

rosmake myApplication

and the executable will be created in the bin directory.

To run the executable do:

bin/myThirdApplication

You will get the semantic output on your screen.

Creating the geometric relations

We will start with creating the geometric relation objects for the relation between body C with point a and orientation frame [e], and body D with point b and orientation frame [f], all expressed in coordinate frame [r], together with their coordinate representations using KDL types.

// Creating the geometric relations
// a Position with a KDL::Vector
Vector coordinatesPosition(1,2,3);
Position<Vector> position("a","C","b","D","r",coordinatesPosition);
// an Orientation with a KDL::Rotation
Rotation coordinatesOrientation = Rotation::EulerZYX(M_PI/4,0,0);
Orientation<Rotation> orientation("e","C","f","D","f",coordinatesOrientation);
// a Pose with a KDL::Frame
KDL::Frame coordinatesPose(coordinatesOrientation,coordinatesPosition);
Pose<KDL::Frame> pose1("a","e","C","b","f","D","f",coordinatesPose);
// a Pose as aggregation of a Position and an Orientation
Pose<Vector,Rotation> pose2(position,orientation);
// a LinearVelocity with a KDL::Vector
Vector coordinatesLinearVelocity(1,2,3);
LinearVelocity<Vector> linearVelocity("a","C","D","r",coordinatesLinearVelocity);
// an AngularVelocity with a KDL::Vector
Vector coordinatesAngularVelocity(1,2,3);
AngularVelocity<Vector> angularVelocity("C","D","r",coordinatesAngularVelocity);
// a Twist with a KDL::Twist
KDL::Twist coordinatesTwist(coordinatesLinearVelocity,coordinatesAngularVelocity);
geometric_semantics::Twist<KDL::Twist> twist1("a","C","D","r",coordinatesTwist);
// a Twist as aggregation of a LinearVelocity and an AngularVelocity
geometric_semantics::Twist<Vector,Vector> twist2(linearVelocity,angularVelocity);
// a Torque with a KDL::Vector
Vector coordinatesTorque(1,2,3);
Torque<Vector> torque("a","C","D","r",coordinatesTorque);
// a Force with a KDL::Vector
Vector coordinatesForce(1,2,3);
Force<Vector> force("C","D","r",coordinatesForce);
// a Wrench with a KDL::Wrench
KDL::Wrench coordinatesWrench(coordinatesForce,coordinatesTorque);
geometric_semantics::Wrench<KDL::Wrench> wrench1("a","C","D","r",coordinatesWrench);
// a Wrench as aggregation of a Force and a Torque
geometric_semantics::Wrench<KDL::Vector,KDL::Vector> wrench2(torque,force);

Doing geometric operations

We can, for instance, take the inverses of the created geometric relations by:
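Based on the inverse() call shown earlier on this page for poses, the inverses of the relations created above would look like the following sketch (the same pattern applies to the other relation types):

```cpp
// Taking inverses of the geometric relations created above
Position<Vector> positionInverse = position.inverse();
Pose<KDL::Frame> poseInverse = pose1.inverse();
geometric_semantics::Twist<KDL::Twist> twistInverse = twist1.inverse();
geometric_semantics::Wrench<KDL::Wrench> wrenchInverse = wrench1.inverse();
```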

If you execute the program, you will get screen output on the semantic correctness (and note: in this case also incorrectness) of the compositions (if not, check the build flags of your geometric_semantics library as explained in the user guide). You can print and check the result of the composition using:

Some extra examples

In case you are looking for some extra examples, have a look at the geometric_semantics_examples package. So far it contains an example showing the advantage of using semantics when integrating twists, and one when programming two position-controlled robots.

FAQ

Use cases

Semantics Reasoning

Coordinate Semantics Reasoning

Coordinate Calculations

Coordinate Semantics Reasoning and Coordinate Calculations

KDL wiki

Kinematic Chain

Skeleton of a serial robot arm with six revolute joints: one example of a kinematic structure, reducing motion modelling and specification to a geometric problem of relative motion of reference frames.

The Kinematics and Dynamics Library (KDL) develops an application-independent framework for modelling and computation of kinematic chains, such as robots, biomechanical human models, computer-animated figures, machine tools, etc. It provides class libraries for geometrical objects (point, frame, line, ...), kinematic chains of various families (serial, humanoid, parallel, mobile, ...), and their motion specification and interpolation.

User Manual

Why use KDL?

Kinematic Trees: chain and tree structures. In the literature, multiple definitions exist for a kinematic structure: 'chain' as the umbrella term for all types of kinematic structures (chain, tree, graph), or 'chain' as the serial version of a kinematic structure. KDL uses the latter; in graph-theory terminology:

A closed-loop mechanism is a graph,

an open-loop mechanism is a tree, and

an unbranched tree is a chain.

In addition to kinematics, parameters for dynamics (inertia, ...) are also included.

KDL::Vector

A Vector is a 3x1 matrix containing X-Y-Z coordinate values. It is used for representing the 3D position of a point wrt a reference frame, or the rotational or translational part of a 6D motion or force entity.

Creating Vectors

Vector v1;                    // the default constructor, X-Y-Z are initialized to zero
Vector v2(x,y,z);             // X-Y-Z are initialized with the given values
Vector v3(v2);                // the copy constructor
Vector v4 = Vector::Zero();   // all values are set to zero

Get/Set individual elements

The operators [ ] and ( ) use indices from 0 to 2; index checking is enabled/disabled by the DEBUG/NDEBUG definitions:
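For instance (an illustrative fragment using KDL::Vector's element accessors):

```cpp
Vector v(1.0, 2.0, 3.0);
double x = v(0);    // operator(), read the X component
v[2] = 5.0;         // operator[], write the Z component
double z = v.z();   // named accessor for the Z component
```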

Composing frames

You can use the operator * to compose frames. If you have a Frame F_A_B that expresses the pose of frame B wrt frame A, and a Frame F_B_C that expresses the pose of frame C wrt frame B, then the Frame F_A_C that expresses the pose of frame C wrt frame A is calculated as follows:

Frame F_A_C = F_A_B * F_B_C;

F_A_C.p is the location of the origin of frame C expressed in frame A, and F_A_C.M is the rotation of frame C expressed in frame A.

Multiply/Divide with a scalar
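KDL overloads the scalar operators for Vector, Twist, and Wrench; for example (an illustrative fragment, with w1 and t1 existing objects):

```cpp
Wrench w2 = 2.0 * w1;   // scales both the force and the torque part
Wrench w3 = w1 / 2.0;
Twist  t2 = t1 * 0.5;
```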

Adding/subtracting Wrenches

Comparing Wrenches

Element by element comparison with or without user-defined accuracy:

w1==w2;
w1!=w2;
Equal(w1,w2,eps);

Twist and Wrench transformations

Wrenches and Twists are expressed in a certain reference frame; the translational Vector vel of a Twist and the moment Vector torque of a Wrench represent the velocity of, resp. the moment on, a certain reference point in that frame. Common choices for the reference point are the origin of the reference frame or a task-specific point.

The values of a Wrench or Twist change if the reference frame or reference point is changed.

Changing only the reference point

If you want to change the reference point you need the Vector v_old_new from the old reference point to the new reference point expressed in the reference frame of the Wrench or Twist:

t2 = t1.RefPoint(v_old_new);
w2 = w1.RefPoint(v_old_new);

Changing only the reference frame

If you want to change the reference frame but want to keep the reference point intact, you can use a Rotation matrix R_AB, which expresses the rotation of the current reference frame B wrt the new reference frame A:

ta = R_AB*tb;
wa = R_AB*wb;

Note: This operation seems to multiply a 3x3 matrix R_AB with 6x1 matrices tb or wb, while in reality it uses the 6x6 Screw transformation matrix derived from R_AB.

Changing both the reference frame and the reference point

If you want to change both the reference frame and the reference point, you can use a Frame F_AB which contains (i) the Rotation matrix R_AB, which expresses the rotation of the current reference frame B wrt the new reference frame A, and (ii) the Vector v_old_new from the old reference point to the new reference point, expressed in A:

ta = F_AB*tb;
wa = F_AB*wb;

Note: This operation seems to multiply a 4x4 matrix F_AB with 6x1 matrices tb or wb, while in reality it uses the 6x6 Screw transformation matrix derived from F_AB.

First order differentiation and integration

t is the twist that moves frame A to frame B in timestep seconds. t is expressed in reference frame w using the origin of A as velocity reference point.
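KDL provides the helper functions diff() and addDelta() for this (an illustrative fragment; F_A_B1 and F_A_B2 are assumed to be existing frames):

```cpp
double timestep = 0.1;
// twist that moves frame B from pose F_A_B1 to pose F_A_B2 in timestep seconds
Twist t = diff(F_A_B1, F_A_B2, timestep);
// integrating the twist over the same timestep recovers the second frame
Frame F_check = addDelta(F_A_B1, t, timestep);
```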

Kinematic Trees

A KDL::Chain or KDL::Tree consists of a concatenation of KDL::Segments. A KDL::Segment composes a KDL::Joint and a KDL::RigidBodyInertia, and defines a reference and a tip frame on the segment. The following figures show a KDL::Segment, a KDL::Chain, and a KDL::Tree, respectively. At the bottom of this page you'll find links to a more detailed description.

KDL segment

Black: KDL::Segment:

reference frame {F_reference} (implicitly defined by the definition of the other frames wrt. this frame)

tip frame {F_tip}: the frame from the end of the joint to the tip of the segment; default: Frame::Identity(). The transformation from the joint to the tip is denoted T_tip (in KDL directly represented by a KDL::Frame). In a kinematic chain or tree, a child segment is added to the parent segment's tip frame (the tip frame of the parent = the reference frame of the child(ren)).

composes a KDL::Joint (red) and a KDL::RigidBodyInertia (green)

Red: KDL::Joint: single-DOF joint around or along an axis of the joint frame {F_joint}. This joint frame has the same orientation as the reference frame {F_reference}, but can be offset wrt this reference frame by the vector p_origin (default: no offset).

Green: KDL::RigidBodyInertia: Cartesian-space inertia matrix; the arguments are the mass, the vector from the reference frame {F_reference} to the cog (p_cog), and the rotational inertia in the cog frame {F_cog}.

KDL chainKDL tree

Select your revision (1.0.x is the released version; 1.1.x is under discussion, see the kinfam_refactored git branch):

Pose and twist of a Joint

f is the pose resulting from moving the joint from its zero position to a joint value q. t is the twist, expressed in the frame corresponding to the zero position of the joint, resulting from applying a joint speed qdot.

Pose and twist of a Segment

f is the pose resulting from moving the joint from its zero position to a joint value q; it expresses the new tip frame wrt the root frame of the Segment s. t is the twist of the tip frame, expressed in the root frame of the Segment s, resulting from applying a joint speed qdot at the joint value q.

Trees are constructed by adding segments, existing chains or existing trees to a given hook name. The methods will return false if the given hook name is not in the tree. These functions add copies of the arguments, not the arguments themselves!


KDL::Chain

a kinematic description of a serial chain of bodies connected by joints.

built out of KDL::Segments.

A Chain has

a default constructor, creating an empty chain without any segments.

a copy-constructor, creating a copy of an existing chain.

an assignment operator (=) is supported too.

Chain chain1;
Chain chain2(chain3);
Chain chain4 = chain5;

Chains are constructed by adding segments or existing chains to the end of the chain. All segments must have a different name (or "NoName"); otherwise the methods will return false and the segments will not be added. These functions add copies of the arguments, not the arguments themselves!
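For example (an illustrative fragment using the KDL::Segment and KDL::Joint constructors; otherChain is assumed to be an existing chain):

```cpp
Chain chain;
// a revolute joint around Z, with the tip frame 0.5 m along Z
chain.addSegment(Segment("seg1", Joint(Joint::RotZ), Frame(Vector(0.0, 0.0, 0.5))));
chain.addSegment(Segment("seg2", Joint(Joint::RotX), Frame(Vector(0.0, 0.0, 0.4))));
// append a copy of an existing chain at the end
chain.addChain(otherChain);
```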


Kinematic and Dynamic Solvers

For the moment, KDL contains only generic solvers for kinematic chains. They can be used (with care) for every KDL::Chain.

The idea behind the generic solvers is to have a uniform API. We achieve this by inheriting from an abstract class for each type of solver:

ChainFkSolverPos

ChainFkSolverVel

ChainIkSolverVel

ChainIkSolverPos

A separate solver has to be created for each chain. At construction time, it allocates all necessary resources.

A specific type of solver can add solver-specific functions/parameters to the interface, but still has to use the generic interface for its main solving purpose.

The forward kinematics solvers use the function JntToCart(...) to calculate Cartesian-space values from joint-space values. The inverse kinematics solvers use the function CartToJnt(...) to calculate joint-space values from Cartesian-space values.

Recursive forward kinematic solvers

For now we only have one generic forward position and velocity kinematics solver.

It recursively adds the poses/velocities of the successive segments, going from the first to the last segment. You can also get intermediate results by giving a segment number:
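A typical use of the recursive forward position solver (an illustrative fragment; chain is assumed to be an existing KDL::Chain):

```cpp
ChainFkSolverPos_recursive fksolver(chain);
JntArray q(chain.getNrOfJoints());
q(0) = M_PI / 4;

Frame tipPose;
fksolver.JntToCart(q, tipPose);      // pose of the tip of the last segment
Frame midPose;
fksolver.JntToCart(q, midPose, 1);   // intermediate result: pose after segment 1
```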

The FRI and RSI interfaces provide you with an Orocos component that you can add to your robot application to handle the communication with the robot controller.

A readme file with the main installation steps is provided with the code (git or svn checkout). All comments, discussions, questions and suggestions are very welcome at the mailing list: see http://lists.mech.kuleuven.be/mailman/listinfo/kuka-lwr for info on how to subscribe.

Where 'C:/Documents and Settings/virtual/My documents/' is the directory where you unpacked the downloads.

Continue to configure OCL in the CMake GUI by turning off the NO_GPL flag (on by default on Windows). It will then try to link the TaskBrowser with the readline.lib file, which should succeed. After installing OCL, readline should work as on Linux, but only in the standard Cygwin or cmd.exe prompts, not in rxvt.

Quotes from "Really Reusable Robot Code and the Player/Stage Project"

Furthermore, some significant parts of the paper "Really Reusable Robot Code and the Player/Stage Project" have been copied. The purpose is to present a possible philosophy to drive the development of OCL 2.0 (it is recommended to read the entire paper). Feel free to discuss these concepts in the forum.

Our design philosophy is heavily influenced by the operating systems (OS) community, which has already solved many of the same problems that we face in robotics research. For example, the principle function of an operating system is to hide the details of the underlying hardware, which may vary from machine to machine. Similarly, we want to hide the details of the underlying robot. Just as I expect my web browser to work with any mouse, I want my navigation system to work with any robot. Where OS programmers have POSIX, we want a common development environment for robotic applications. Operating systems are equipped with standard tools for using and inspecting the system, such as (in UNIX variants) top, bash, ls, and X11. We desire a similar variety of high-quality tools to support experimental robotics.
Operating systems also support virtually any programming language and style. They do this by allowing the low-level OS interface (usually written in C) to be easily wrapped in other languages, and by providing language-neutral interfaces (e.g., sockets, files) when possible. Importantly, no constraints or normative judgments are made on how best to structure a program that uses the OS. We take the same approach in building robotics infrastructure. Though not strictly part of the OS, another key feature of modern development environments is the availability of standard algorithms and related data structures, such as qsort(), TCP, and the C++ Standard Template Library. We follow this practice of incorporating polished versions of established algorithms into the common code repository, so that each researcher need not re-implement, for example, Monte Carlo localization. Finally, an important but often over-looked aspect of OS design is that access is provided at all levels. While most C programmers will manage memory allocation with the library functions malloc() and free(), when necessary they can dig deeper and invoke the system call brk() directly. We need the same multi-level access for robots; while one researcher may be content to command a robot with high-level “goto” commands, another will want to directly control wheel velocities.
Player comprises four key abstractions: The Player Abstract Device Interface (PADI), the message protocol, the transport mechanism, and the implementation. Each abstraction represents a reusable and separable layer. For example, the TCP client/server transport could be replaced by a CORBA
The central abstraction that enables portability and code re-use in Player is the PADI specification. The PADI defines the syntax and semantics of the data that is exchanged between the robot control code and the robot hardware. For ease of use, the PADI is currently specified as a set of C message structures; the same information could instead be written in an Interface Definition Language (IDL), such as the one used in CORBA systems. The PADI’s set of abstract robot control interfaces constitutes a virtual machine, a target platform for robot controllers that is instantiated at run time by particular devices. The goal of the PADI is to provide a virtual machine that is rich enough to support any foreseeable robot control system, but simple enough to allow for an efficient implementation on a wide array of robot hardware. The key concepts used in the PADI, both borrowed from the OS community, are the character device model and the driver/interface model.
The interface/driver model groups devices by logical functionality, so that devices which do approximately the same job appear identical from the user’s point of view. An interface is a specification for the contents of the data stream, so an interface for a robotic character device maps the input stream into sensor readings, the output stream into actuator commands, and ioctls into device configurations. The code that implements the interface, converting between a device’s native formats and the interface’s required formats, is called a driver. Drivers are usually specific to a particular device, or a family of devices from the same vendor. Code that is written to target the interface rather than any specific device is said to be device independent. When multiple devices have drivers that implement the same interface, the controlling code is portable among those devices. Many hardware devices have unique features that do not appear in the standard interface. These features are accessed by device-specific ioctls, while the read and write streams are generally device independent. Interfaces should be designed to be sufficiently complete so as to not require use of device-specific ioctls in normal operation, in order to maintain device independence and portability. There is not a one-to-one mapping between interface definitions and physical hardware components. For example, the Pioneer’s native P2OS interface bundles odometry and sonar data into the same packet, but a Player controller that only wants to log the robot’s position does not need the range data. For portability, Player separates the data into two logical devices, decoupling the logical functionality from the details of the Pioneer’s implementation. The pioneer driver controls one physical piece of hardware, the Pioneer microcontroller, but implements two different devices: position2d and sonar.
These two devices can be opened, closed, and controlled independently, relieving the user of the burden of remembering details about the internals of the robot.
In order to more conveniently support different devices, we introduced the interface/driver distinction to Player. An interface, such as sonar, is a generic specification of the format for data, command, and configuration interactions that a device allows. A driver, such as pioneer-sonar, specifies how the low-level device control will be carried out. In general, more than one driver may support a given interface; conversely, a given driver may support multiple interfaces. Thus we have extended to robot control the device model that is used in most operating systems, where, for example, a wide variety of joysticks all present the same “joystick” interface to the programmer.
The primary cost of adherence to a generic interface for an entire class of devices is that the features and functionality that are unique to each device are ignored. Imagine a fiducial-finder interface whose data format includes only the bearing and distance to each fiducial. In order to support that interface, a driver that can also determine a fiducial’s identity will be under-utilized, some of its functionality having been sacrificed for the sake of portability. This issue is usually addressed by either adding configuration requests to the existing interface or defining a new interface that exposes the desired features of the device. Consider Player’s Monte-Carlo localization driver amcl; it can support both the sophisticated localization interface that includes multiple pose hypotheses, and the simple position2d interface that includes one pose and is also used by robot odometry systems.
These higher-level drivers use other drivers, instead of hardware, as sources of data and sinks for commands. The amcl driver, for example, is an adaptive Monte Carlo localization system [TFBD00] that takes data from a position2d device, a laser device, and a map device, and in turn provides robot pose estimates via the localize interface (as mentioned above, amcl also supports the simpler position2d interface, through which only the most likely pose estimate is provided). Other Player drivers perform functionality such as path-planning, obstacle avoidance, and various image-processing tasks. The development of such higher-level drivers and corresponding interfaces yields three key benefits. First, we save time and effort by implementing well-known and useful algorithms in such a way that they are immediately reusable by the entire community. Just as C programmers can call qsort() instead of reimplementing quicksort, robotics students and researchers should be able to use Player’s vfh driver instead of reimplementing the Vector Field Histogram navigation algorithm [UB98]. The author of the driver benefits by having her code tested by other scientists in environments and with robots to which she may not have access, which can only improve the quality of the algorithm and its implementation. Second, we create a common development environment for implementing such algorithms. Player’s C++ Driver API clearly defines the input/output and startup/shutdown functionality that a driver must have. Code that is written against this API can enter a community repository where it is easily understood and can be reused, either in whole or in part. Finally, we create an environment in which alternative algorithms can be easily substituted. If a new localization driver implements the familiar localize interface, then it is a drop-in replacement for Player’s amcl. The two algorithms can be run in parallel on the same data and the results objectively compared.

RTT v1.x wiki

This wiki has only information for the RTT 1.x releases. For RTT 2.x, look at the Toolchain wiki.

Documentation suggestions

From a recent discussion on the mailing list; simply a place to put down ideas before we forget them ...

Use Wiki for FAQ instead of XML doc

FAQ

My shared libraries won't load

The deployer won't load my plugins

Can I use dynamic memory allocation, and where?

How do I run in real-time? ie how do I configure my system to allow Orocos to run in real-time?

Why do I have periodic delays when attaching a remote deployer?

Configuring OmniORB instead of TAO

OmniORB options for IDL

How do I set up a client application using pyOmniOrb (OmniORB python bindings)?

<quote> Actually it's an option of the omniidl compiler... the command to use is

omniidl -bcxx -Wba myIdlFile.idl

This will definitely become a FAQ item :-) </quote>

My wiki page is blank

<quote> When your text is not appearing on your wiki page, it's because you ended your wiki page with an indented line. So if your last line is:

Examples and Tutorials

The tutorials and example code are split in two parts, one for new users and one for experienced users of the RTT.

There are several sources where you can find code and tutorials. Some code is listed in wiki pages, other code is downloadable in a separate package, and finally you can find code snippets in the manuals too.

Simple examples

RTT Examples Get started with simple, ready-to-compile examples of how to create a component

Assumptions

The build directory is within the source directory. This helps with dynamic library loading.

Compatibility

Tested on v1.8 trunk on Mac OS X Leopard with omniORB from MacPorts, and Ubuntu Jaunty with ACE/TAO.

Files

See the attachments at the bottom of this page.

Overview

An RTT toolkit plugin provides information to Orocos about one or more custom types. This type of plugin allows RTT to display your types' values in a deployer, to load/save your types to/from XML files, and to provide constructors and operators that can be used to manipulate your types within program scripts and state machines.

The toolkit plugin is in the root directory, with supporting test files in the tests directory.

CMake support files are in the config directory.

The transport plugin is in the corba directory, with supporting test files in the corba/tests directory.

Limitations

Currently, this example does:

Show how to write a plugin telling Orocos about your custom types

Show how to write a transport plugin allowing Orocos to move your custom types between deployers/processes.

Demonstrate how to test said plugins.

Use either ACE/TAO or OmniORB for CORBA support

Currently, this example does not yet:

Show how to read/write the custom types to/from XML file

Provide manipulators and/or accessors of your custom types, that can be used in scripts and state machines.

Demonstrate testing of the CORBA transport plugin within a single deployer, using two components (an optimization in RTT bypasses the CORBA mechanism in this case, rendering the test useless).

Deal with all intricacies of the boost types (eg all of the special values).

NB I could not find a method to get at the underlying raw 64-bit or 96-bit boost representation of ptime. Hence, the transport plugin inefficiently transports a ptime type using two separate data values. If you know of a method to get at the raw representation, I would love to know. Good luck in template land ...

Note that Orocos now knows the correct types (eg boost_ptime) and can display each port's value. Issue multiple ls commands and you will see the values change. The ptime is simply the date and time at which the send component set the port value, and the duration is the time between port values being set on each iteration (ie this should be approximately the period of the send component).

Toolkit plugin

The toolkit plugin is defined in BoostToolkit.hpp.

namespace Examples
{
    /// \remark these do not need to be in the same namespace as the plugin

    /// put the time onto the stream
    std::ostream& operator<<(std::ostream& os, const boost::posix_time::ptime& t);
    /// put the time duration onto the stream
    std::ostream& operator<<(std::ostream& os, const boost::posix_time::time_duration& d);
    /// get a time from the stream
    std::istream& operator>>(std::istream& is, boost::posix_time::ptime& t);
    /// get a time duration from the stream
    std::istream& operator>>(std::istream& is, boost::posix_time::time_duration& d);

The toolkit plugin is contained in an Examples namespace. First up we define input and output stream operators for each of our types.

The actual plugin class and singleton object are then defined. The plugin provides a name that is unique across all plugins, and contains information on the types, constructors and operators for each of our custom types.

We then provide a type information class for each of our two custom types. These type info classes are the mechanism for Orocos to work with XML and our custom types. NB: the true boolean value passed to each TypeInfo class indicates that stream operators are available (as defined above).

Next we create the singleton instance of the plugin as BoostToolkit. TODO explain naming scheme. Then we declare the unique name of this plugin, "Boost".

bool BoostPlugin::loadTypes()
{
    TypeInfoRepository::shared_ptr ti = TypeInfoRepository::Instance();
    /* each quoted name here (eg "boost_ptime") must _EXACTLY_ match that
       in the associated TypeInfo::composeTypeImpl() and
       TypeInfo::decomposeTypeImpl() functions (in this file), as well as
       the name registered in the associated Corba plugin's
       registerTransport() function (see corba/BoostCorbaToolkit.cpp)
    */
    ti->addType(new BoostPtimeTypeInfo("boost_ptime"));
    ti->addType(new BoostTimeDurationTypeInfo("boost_timeduration"));
    return true;
}

The loadTypes() method provides the actual association for Orocos, from a type name to a TypeInfo class. This is how Orocos identifies a type at runtime. The choice of name is critical - it is what is shown in the deployer for an item's type, and should make immediate sense when you see it. It probably also should not be too long, to keep things readable within the deployer and taskbrowser. The name you use here for each type is very important and must match the names used in other places (TODO list the other places?).

bool BoostPlugin::loadConstructors()
{
    // no constructors for these particular types
    return true;
}

bool BoostPlugin::loadOperators()
{
    // no operators for these particular types
    return true;
}

Currently this example does not provide any constructors or operators usable in program scripts and state machines. TODO update this.

The implementation of a TypeInfo class for one of our custom types must use the same type name as used in loadTypes() above. These functions would also provide the mechanism to load/save the type to/from XML. TODO update this.

ORO_TOOLKIT_PLUGIN(Examples::BoostToolkit)

This macro (lying outside the namespace!) takes the fully qualified singleton, and makes it available to the RTT type system at runtime. It basically makes the singleton identifiable as an RTT toolkit plugin, when Orocos loads the dynamic library formed from this toolkit.

Build system

Now the build system takes this .cpp file, and turns it into a dynamic library. We are going to examine the root CMakeLists.txt to see how to create this library, but for now, we will ignore the corba parts of that file.

The create_component macro makes an Orocos shared library for us. This library will contain only our toolkit plugin. Note that we make the library name dependent on the Orocos target we are building for (eg macosx or gnulinux). This allows us to have plugins for multiple architectures on the same machine (typically, gnulinux and xenomai, or similar). We also have to link the shared library against the boost "date time" library, as we are using certain boost functionality that is not available in the header files.
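The relevant part of the root CMakeLists.txt usually looks something like the following sketch. The exact create_component signature and library names are illustrative assumptions; check the CMakeLists.txt shipped with this example for the authoritative version.

```cmake
# Build the toolkit plugin as an Orocos shared library. The library name
# includes the Orocos target (eg macosx, gnulinux) so plugins for several
# architectures can coexist on one machine.
create_component(BoostToolkit-${OROCOS_TARGET}
                 BoostToolkit.cpp)

# Link against boost's date_time library: some of the boost functionality
# we use is compiled into the library, not header-only.
target_link_libraries(BoostToolkit-${OROCOS_TARGET} boost_date_time)
```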

SUBDIRS(tests)

Lastly we also build the 'test' directory.

Tests

There are two very simple test components that communicate each of our custom types between them. Tests are very important when developing plugins. Trying to debug a plugin within a complete system is a daunting challenge - do it in isolation first.

The send component regularly updates the current time on its ptime port, and the duration between ptime port updates on its timeDuration port.

To build

For other operating systems substitute the appropriate value for "macosx" when setting OROCOS_TARGET (e.g. "gnulinux").

Tested in Mac OS X Leopard 10.5.7.

To run

In a shell

cd /path/to/plugins/build/corba/tests
./corba-recv

In a second shell

cd /path/to/plugins/build/corba/tests
./corba-send

Now the exact same two test components from Parts 1 and 2 are in separate processes. Typing ls in either process will present the same values (subject to network latency delays, which typically are not human perceptible) - the data and types are now being communicated between deployers.

Now, the transport plugin is responsible for communicating the types between deployers, while the toolkit plugin is responsible for knowing each type and being able to display it. Separate responsibilities. Separate plugins.

NB for the example components, send must be started after recv. Starting only corba-recv and issuing ls will display the default values for each type. Also, quitting the send component and then attempting to use the recv component will lock up the recv deployer. These limitations are not due to the plugins - they are simply due to the limited functionality of these test cases.

Without the transport plugin

Running the same two corba test programs without loading the transport plugin is instructive as to what happens when you do not match up certain things in the toolkit sources. This is very important!

The culprit here is that we tried to pass unknown types through CORBA. While the toolkit plugin tells Orocos about a type, it takes a transport plugin to tell Orocos how to communicate the type. The above failure indicates that Orocos came across a type named unknown_t and did not know how to deal with it. We will cover this more later in the tutorial, and specifically where and why this occurs. As a matter of interest, comparing the sources of corba/tests/corba-recv.cpp and corba/tests/corba-recv-no-toolkit.cpp, the differences are

We send a time duration as individual time components. Note that we avoid boost's fractional_seconds fiasco, and always send nanoseconds even if the sender or receiver implementations only support microseconds.

// can't get at underlying type, so send this way (yes, more overhead)
// see BoostCorbaConversion.hpp::struct AnyConversion<boost::posix_time::ptime>
// for further details.
struct ptime
{
    // julian day
    long date;
    time_duration time_of_day;
};
};
};

I was not able to find a way to get to the native 64 or 96 bits that define a ptime value. Consequently, we inefficiently send a ptime as a julian day and a time duration within the day. Adequate for an example, but definitely more data than we would like to send.

Note that CORBA IDL knows about certain types already, e.g. short and long, and that we can use our time_duration structure in later structures.

We will come back to this IDL file during the build process.

The transport plugin

The actual plugin is defined in corba/BoostCorbaToolkit.hpp. This is the equivalent of the BoostToolkit.hpp file, except for a transport plugin.

The transport plugin provides its name, the name of its transport mechanism, and a function to register the transport into Orocos. Note that no types are mentioned here as that is taken care of by the toolkit plugin. A transport plugin without a corresponding toolkit plugin is useless. Orocos will not know about the types and hence will not even make it to looking up transports for a given type.

The implementation of the plugin is in corba/BoostCorbaToolkit.cpp, and is very straightforward.

Registering a transport registers each type for a given transport protocol (the ORO_CORBA_PROTOCOL_ID above, defined in rtt/src/corba/CorbaLib.hpp). Each type of transport must have a unique protocol ID, though currently Orocos only supports one, CORBA. Registration occurs automatically when the transport is loaded.

Here we pick up some standard RTT types, and the I/O operators for our custom boost types. We also pick up BoostTypesC.h. This is a file that CORBA generates from our BoostTypes.idl file above, and contains CORBA-specific code. Ignore its contents, but just realise that it is generated from the .idl file.

// must be in RTT namespace to match some rtt/corba code
namespace RTT
{

For some historical reason, I believe this has to be in the RTT namespace. Not sure if that is still true, but ... maybe it is to match the generated output from the .idl file?

Here we define some shorthand types, to make typing easier. I also find that having these two type names this way, Corba vs Std, makes it easier to read some of the later code. The actual Corba::timer_duration type comes from the files generated from our .idl file.

The last four of the following six functions are required by the CORBA library, to enable conversion between the CORBA and non-CORBA types. The two convert functions are there for convenience, and to save replicating code.

The above two functions do the actual work of converting data to/from the CORBA and standard types. In this case we can basically copy individual data members - more complicated types may require further conversions, manipulation, etc.

The above four functions are, as previously mentioned, the standard interface to convert types to/from CORBA types. While the syntax might appear a little strange to you (e.g. the "<<=" operator), you can just copy the above to your own custom types (I copy these between transport plugins, an advantage of creating CORBA and standard types at top). Note well the one dynamic allocation in the toAny() function: transport plugins are most definitely not real-time capable.

The same six functions then follow for our boost::ptime type. They are not covered in detail here.

Build system

IF (ENABLE_CORBA)
INCLUDE(${CMAKE_SOURCE_DIR}/config/UseCorba.cmake)

This include ensures we know about the CORBA library, and also picks up some CMake macros we need.

The ORO_ADD_CORBA_SERVERS CMake macro we get from UseCorba.cmake takes a list of source files (CPPS), a list of header files (HPPS - we have none here) and a list of interface description files (IDLS), and creates the necessary CMake code to generate the CORBA files from the IDL files. Basically, this takes our BoostTypes.idl file and produces header and source files to deal with that CORBA type. Note that this macro appends to the existing files listed in CPPS and HPPS - we'll need them shortly.

INCLUDE_DIRECTORIES( ${CMAKE_CURRENT_BINARY_DIR}/. )

We now have our own source files in the source directory, as well as source files generated into the build directory. This ensures we can pick up the source files from the build directory as well.

Here we create a component shared library that contains only the transport plugins. Note that the library contains all the source files in the CPPS CMake variable, which now contains all the .cpp files in this directory (due to the FILE(GLOB ...) statement) as well as the source files generated from the ORO_ADD_CORBA_SERVERS macro. These make up our transport toolkit. Fundamentally, the transport toolkit shared library is no different from a shared library of standard components, except for a tiny bit of C++ code that comes out of the ORO_TOOLKIT_PLUGIN() macro at the end of the BoostCorbaToolkit.cpp file. RTT then recognizes this shared library as containing a transport plugin.

SUBDIRS(tests)
ENDIF (ENABLE_CORBA)

And lastly, pick up the tests.

Tests

The corba test programs contain one component each; distributing the two components across processes requires the CORBA transport plugin. The exact same send and receive test components are used from Part 2.

The corba-send test program instantiates a send component, and uses an RTT ControlTaskProxy to represent the remote receive component.

Initialize the CORBA Orb, and then thread it (yes, this does use Proxy and Server functions - this is ok). This puts the CORBA Orb in a background thread, allowing us to run the taskbrowser (below) in the main thread.

We make the receive component a CORBA server, meaning that the send component will connect to this component. It could have been done the other way around - in this example, it simply impacts which test program has to be started first (the server must be running for the client to connect to it). Again we thread the ORB to put it in its own background thread.

Run the taskbrowser on the receive component (in the main thread). Note that the send component is not mentioned anywhere. The "server" does not know about any "clients", but the "clients" do need to know about the server.

Rationale

Problem: How to reuse a component when you need the ports to have different names?

Solution: Name the connection between ports in the deployer. This essentially allows you to rename ports. Unfortunately, this extremely useful feature is not documented anywhere (as of July, 2009).

Assumptions


The build directory is within the source directory. This helps with dynamic library loading.

Admittedly, this is a contrived example, but the structure is very useful and occurs more frequently than you may realise (say, using N copies of a camera component, deploying components for both a left and a right robot arm within the same deployer, etc).

The first section of the deployment file simply loads the Orocos libraries we use (including the KDL toolkit, so that we can inspect and modify KDL types within the deployer), and then loads our shared library (libConnectionNaming).

Lastly, the robot component is created with its input port on a connection named cartesianPosition_desi.

Now, the deployer uses connection names when connecting components between peers, not port names. So it attempts to connect a Robot.cartesianPosition_desi connection to a Vehicle.cartesianPosition_desi connection (which in this part matches the port names).
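In the deployment XML, a connection name is simply the value given for a port entry. The fragment below is a hedged sketch of what this looks like (the element layout follows the usual OCL DeploymentComponent conventions, but the component type string and exact structure are illustrative; consult the Connect-1.xml shipped with this example):

```xml
<struct name="Robot" type="ConnectionNaming::RobotComponent">
  <struct name="Ports" type="PropertyBag">
    <!-- port name on the left, connection name as the value;
         giving a different value here effectively "renames" the port -->
    <simple name="cartesianPosition_desi" type="string">
      <value>cartesianPosition_desi</value>
    </simple>
  </struct>
  <struct name="Peers" type="PropertyBag">
    <simple type="string"><value>Vehicle</value></simple>
  </struct>
</struct>
```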

Build the library, and then run this part with

cd /path/to/ConnectionNaming/build
deployer-macosx -s ../Connect-1.xml

Examine the HMI and Robot components, and note that each has a connected port, and the port values match.

Part 2: HMI, one filter and a robot

This part adds a filter component between the HMI and the robot (see Connect-2.xml)

As with Part 1, the first part of the file loads the appropriate libraries (left out here, as it is identical to Part 1).

The Filter component is deployed with its input port being part of a connection named unfiltered_cartesianPosition_desi, while its output port is part of a connection named filtered_cartesianPosition_desi. Comparing with the HMI port/connections above and the Robot port/connections below, you can see that the Filter's input port is connected to the HMI and the output port is connected to the Robot.

The robot component is the same as Part 1, except that its input port is part of a connection named filtered_cartesianPosition_desi (ie connected to the Filter).

Run this part with

cd /path/to/ConnectionNaming/build
deployer-macosx -s ../Connect-2.xml

Examine all three components, and note that all ports are connected, and in particular, that the HMI and Filter.inputPosition ports match while the Filter.outputPosition and Vehicle ports match (ie they have the 'x' axis filtered out).

Using connection naming allows us to connect ports of different names. This is particularly useful with a generic component like this filter, as in one deployment it may connect to a component with ports named cartesianPosition_desi, while in another deployment it may connect to ports named CartDesiPos, or any other names. The filter component is now decoupled from the actual port names used to deploy it.

Part 3: HMI, two filters and a robot

This part adds a second filter between the first filter and the robot.

The second filter has its input port part of a connection named filtered_cartesianPosition_desi (ie it is connected to Filter1's output port), and the second filter's output port is part of a connection named double_filtered_cartesianPosition_desi (which, as you will see, is connected to the robot's input port).

The only change in the robot component, from Part 2, is to change its peer to Filter2 and to use a connection named double_filtered_cartesianPosition_desi (ie connect it to Filter2).

Run this part with

cd /path/to/ConnectionNaming/build
deployer-macosx -s ../Connect-3.xml

Examine all components, and note which ports are connected, and what their values are. Note that the vehicle has two axes knocked out (x and y).

Points to note

WARNING The deployer displays port names for ports within components, while the OCL reporting component also uses port names. Only the act of connecting ports between peers when deploying a component network makes use of the connection naming shown above.

Using connection naming allows us to reuse a component without resorting to renaming its ports or modifying its code in any way. This is an example of deployment-time configuration. Note that there are certainly instances where run-time configuration of port-names may be needed (eg the component has to name its ports based on the component name itself), but in our experience, deployment-time configuration is more frequent and decouples components better.

Note that as many filters as are required could be chained together in this manner, and that none of the input, output, nor filter components need know that they are connected in such a fashion. Decoupling is your friend, and allowed the Filter component writer to simply concentrate on writing a component that did one thing well: filtered a cartesian position (yes, a trivial example, but a valid point nonetheless).

You may notice that the deployment files do not specify peer combinations in pairs. The peers are mentioned in one direction only. We use this to decouple (yet again) a component from knowing what peers it is connected to, where possible. For example, Filter1 in both Parts 2 and 3 does not know what component is down-stream from it. It doesn't know, nor does it care, whether it is being filtered again, connected to a robot, or whatever. Again, decoupling. This can dramatically help when deploying large systems.

Simple TCP client using non-periodic component

Rationale

Problem: You want a component that connects to a remote TCP server, and reads data from it (this example could easily write, instead of reading). This component will block for varying amounts of time when reading.

Solution: Use a non-periodic component. This example outlines one method to structure the component, to deal with the non-blocking reads while still being responsive to other components, being able to run a state machine, etc.

<!-- break -->

Assumptions

Uses Qt sockets to avoid operating-system intricacies and differences when using actual sockets. The code can easily be modified to use bind(), accept(), listen(), etc. instead. It is the structure of the solution that we are interested in.

The build directory is within the source directory. This helps with dynamic library loading.

Does not attempt reconnection if unable to connect on the first attempt.

Non-robust error handling.

Does not validate property values (a robust component would validate that the timeouts were valid, eg. not negative, within a configureHook()).
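The last assumption above could be lifted with a configureHook() along these lines (a sketch only, using the RTT 1.x API and the property members from the class definition shown below; the error messages are illustrative):

```cpp
// Sketch: validate the configuration before the component can be started.
bool SimpleNonPeriodicClient::configureHook()
{
    if (connectionTimeout_prop.get() <= 0 || readTimeout_prop.get() <= 0)
    {
        RTT::log(RTT::Error) << "Timeout properties must be positive"
                             << RTT::endlog();
        return false;   // refuse to configure with invalid values
    }
    if (hostName_prop.get().empty())
    {
        RTT::log(RTT::Error) << "No host name set" << RTT::endlog();
        return false;
    }
    return true;
}
```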

Files

The .cpf file has a .txt extension simply to keep the wiki happy. To use the file, rename it to SimpleNonPeriodicClient.cpf.

Component definition

This is the class definition

class SimpleNonPeriodicClient : public RTT::TaskContext
{
protected:
    // DATA INTERFACE

    // *** OUTPUTS ***

    /// the last read data
    RTT::WriteDataPort<std::string>  lastRead_port;
    /// the number of items successfully read
    RTT::Attribute<int>              countRead_attr;

    // *** CONFIGURATION ***

    /// name to listen for incoming connections on, either FQDN or IPv4 address
    RTT::Property<std::string>       hostName_prop;
    /// port to listen on
    RTT::Property<int>               hostPort_prop;
    /// timeout in seconds, when waiting for connection
    RTT::Property<int>               connectionTimeout_prop;
    /// timeout in seconds, when waiting to read
    RTT::Property<int>               readTimeout_prop;

public:
    SimpleNonPeriodicClient(std::string name);
    virtual ~SimpleNonPeriodicClient();

protected:
    /// reset count and lastRead, attempt to connect to remote
    virtual bool startHook();
    /// attempt to read and process one packet
    virtual void updateHook();
    /// close the socket and cleanup
    virtual void stopHook();
    /// cause updateHook() to return
    virtual bool breakUpdateHook();

    /// Socket used to connect to remote host
    QTcpSocket* socket;
    /// Flag indicating to updateHook() that we want to quit
    bool        quit;
};

The component has a series of properties specifying the remote host and port to connect to, as well as timeout parameters. It also uses an RTT Attribute to count the number of successful reads that have occurred, and stores the last read data as a string in a RTT data port.

Component implementation

The class definition is included, as well as the RTT logger and, importantly, the OCL component loader that turns this class into a deployable component in a shared library.

Most importantly, all Qt-related headers come after all Orocos headers. This is required as Qt redefines certain words (eg "slot", "emit") which, when used in our code or in Orocos code, cause compilation errors.

The constructor simply sets up the data interface elements (ie the port, attribute and properties), and gives them appropriate initial values. Note that some of these initial values are illegal, which would aid any validation code in a configureHook() (which has not been done in this example).

SimpleNonPeriodicClient::~SimpleNonPeriodicClient(){delete socket;}

The destructor cleans up by deleting the socket we allocated in the constructor.

The updateHook() function attempts to wait until data is available, and then reads the data BUFSIZE characters at a time. If it times out waiting for data, then it errors out and disconnects the port. This is not a robust approach and a real algorithm would deal with this differently.

As data may be continually arriving and/or we get more than BUFSIZE characters at a time, the while loop may iterate several times. The quit flag will indicate if the user wants to stop the component, and that we should stop reading characters.
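A sketch of that structure (illustrative, not the verbatim implementation; BUFSIZE and the exact error handling are assumptions):

```cpp
// Sketch of updateHook(): block for data, read it in BUFSIZE chunks,
// then re-trigger ourselves unless the user wants to stop.
void SimpleNonPeriodicClient::updateHook()
{
    // block until data arrives, or we time out
    if (!socket->waitForReadyRead(readTimeout_prop.get() * 1000))
    {
        this->error();   // not robust: a real component would recover here
        return;
    }
    char buffer[BUFSIZE];
    while (!quit && socket->bytesAvailable() > 0)
    {
        qint64 n = socket->read(buffer, BUFSIZE);
        if (n > 0)
        {
            lastRead_port.Set(std::string(buffer, n));
            countRead_attr.set(countRead_attr.get() + 1);
        }
    }
    // act like a periodic component with a varying period
    if (!quit)
        engine()->getActivity()->trigger();
}
```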

Of particular note is the last line

engine()->getActivity()->trigger();

This causes updateHook() to be called again immediately by the execution engine. Essentially, this makes the non-periodic component act as a periodic component with a varying period. Of course, this is not called if the component is being stopped (ie quit==true).

The breakUpdateHook() is very important, as it is the only way to inform a blocked updateHook() that it is time to return and quit. In this example we set the quit flag and return true. The quit flag will be picked up by updateHook() when it finishes waiting for data (in socket->waitForReadyRead()). Returning true from breakUpdateHook() tells the execution engine that we successfully told updateHook() to return and that it should wait (one second, hardcoded) for updateHook() to complete and return. If we returned false, then stop would also return false.
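A minimal sketch matching that description (assumed, not the verbatim code):

```cpp
// Tell a blocked updateHook() that it is time to return.
bool SimpleNonPeriodicClient::breakUpdateHook()
{
    quit = true;    // picked up by updateHook() once waitForReadyRead() returns
    return true;    // the execution engine will now wait for updateHook() to finish
}
```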

We could have also done something like socket->abort() to forcibly terminate any blocked socket->waitForReadyRead() calls.

When using system calls (e.g. read()) instead of Qt classes, you could attempt to send a signal to interrupt the system call; however, this might not have the desired effect when the component is deployed ... the reader is advised to be careful here.

ORO_CREATE_COMPONENT(SimpleNonPeriodicClient)

This line of code creates a deployable component for the SimpleNonPeriodicClient class, that the deployer can load from a shared library.

Using XML substitution to manage complex deployments

Rationale

Problem: You deploy multiple configurations of your system, perhaps choosing between a real and simulated robot, some real and simulated device, etc. You want to parameterize the deployments to reduce the number of files you have to write for the varying configuration combinations.

Solution: Use the XML ENTITY element.

Assumptions

Works with Xerces only (v2 tested, v3 should also support this). Will not work with the default TinyXML processor.

Compatibility

Files

See the attachments at the bottom of this page.

Approach

This simple example demonstrates how to deploy a tiny system in two configurations, by simply changing the name of the deployed component. This approach can be (and has been) used to manage deployments with many system configurations.

There is a top-level file per configuration, which specifies all the parameters. Each top-level file then includes a child file which instantiates components, etc.

The internal entity values are used to substitute component names and other basic parameters. The external entity value (&FILE_NAME) is used to include child files, so that the entity values defined in the top-level file are available within the child file. Using Orocos' builtin include statement does not make the top-level entity values available within the child file.

The child file simply substitutes the two internal entities for a library name, and a component name.
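As a sketch (the entity values, file name deploy-system.xml, and component structure are illustrative; only &FILE_NAME comes from the description above), a top-level file might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Top-level file for one configuration. Requires the Xerces XML
     processor: TinyXML does not perform entity substitution. -->
<!DOCTYPE properties [
  <!-- internal entities: the parameters for this configuration -->
  <!ENTITY COMPONENT_NAME "Robot_real">
  <!ENTITY LIBRARY_NAME   "liborocos-myrobot-real">
  <!-- external entity: pulls in the shared child file -->
  <!ENTITY FILE_NAME SYSTEM "deploy-system.xml">
]>
<properties>
  &FILE_NAME;
</properties>
```

The child file then writes &COMPONENT_NAME; and &LIBRARY_NAME; wherever a component or library name is needed, and each top-level file supplies different values for those entities.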

You can use relative paths within the external entity filename. I have had inconsistent success with this - sometimes the relative path is needed, and other times it isn't. I believe the path needs to be relative to the file doing the including, so if that file was itself loaded via a relative path, then you specify the child file relative to the parent file, not relative to the current working directory in which you started the deployment.

This page collects notes and issues on the use of real-time logging. Its contents will eventually become the documentation for this feature.

This feature has been integrated in the Orocos 1.x and 2.x branches but is still considered under development. However, if you need a real-time logging infrastructure (ie text messages to users), this is exactly where you need to be. If you need real-time data-stream logging of ports, use OCL's Reporting or NetCDFReporting component instead.

It is noted in the text where Orocos 1.x and 2.x differ.

Restrictions and issues

Restrictions

Start up the logging components first: logging events prior to the logging service's configure() will be dropped. The problem is that the logging service connects categories and appenders, and is itself a component. So until it is configured, and the connections are all made, no appenders are available to deal with the event. We therefore suggest you put your appender components and the logging service in a separate deployment XML or script file which is loaded first. This allows your application components to use logging from the start (component creation). See the XML deployment files in ocl/logging/tests/data for examples. OCL's deployer can execute multiple XML or script files in order.

Categories can not be created in real-time: they live on the normal heap via new/delete. Create all categories in your component's constructor, during configureHook(), or similar.

NDCs are not supported: they involve std::string and std::vector, which we currently can't replace.

Works only with OCL's deployers: If you use a non-deployer mechanism to bring up your system, you will need to add code to ensure that the log4cpp framework creates our OCL::Category objects, and not the default (non-real-time) log4cpp::Category objects. This should be done early in your application, prior to any components and categories being created.
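The category-creation restriction above can be sketched as follows (RTT/OCL 1.x; the category name and the OCL header path are assumptions):

```cpp
// Create categories once, at configuration time - never in real-time code.
#include <log4cpp/Category.hh>
#include <ocl/Category.hpp>   // assumed header location for OCL::logging::Category

bool MyComponent::configureHook()
{
    // getInstance() creates the category (and all its parents) on the normal heap
    logger = dynamic_cast<OCL::logging::Category*>(
        &log4cpp::Category::getInstance("org.me.myapp.MyComponent"));
    return (0 != logger);   // cast fails if OCL's category factory is not in use
}
```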

Issues

On the ML it was requested to log when events have been lost. There are two places where this would need to be implemented, both annotated with TODOs in the code.

When creation of the OCL::String objects in a LoggingEvent exhausts the memory pool

When the buffer between a category and its appenders is full

This is not currently dealt with, but could be in future implementations.

In RTT/OCL 1.x, multiple appenders connected to the same category will each receive only some of the incoming logging events. This is because each appender pops different elements from the category's buffer. This issue has been solved in 2.x.

The size of the buffer between a category and its appenders is currently fixed (see ocl/logging/Category.cpp). This will be fixed later on in the 2.x branch. Note that that fixed size, plus the default consumption rate of the FileAppender, means you can exhaust the default TLSF memory pool in very short order. For a complex application (~40 components, 400 Hz cycle rate) we increased the default buffer size to 200, increased the memory pool to tens of kilobytes (or megabytes) and increased the FileAppender consumption rate to 500 messages per second.

Viewing logs

We can use standard log viewers for Log4j in two ways:

Use FileAppender which writes log lines to a file and let the viewers read that file

Use Log4cxxAppender which creates a network socket to which Log4cxx/Log4j viewers can connect.

The deployer now defaults to a 20k real-time memory pool (see OCL CMake option ORO_DEFAULT_RTALLOC_SIZE), all Orocos RTT::Logger calls end up inside of log4cpp, and the default for RTT::Logger logging events is to log to a file "orocos.log". Same as always. But now you can configure all logging in one place!

IMPORTANT Be aware that there are two logging hierarchies at work here:

a non-real-time, log4cpp-based logging in use by RTT::Logger (currently only for RTT 1.x)

a real-time, OCL::Logging-based logging (with log4cpp underneath) in use by application code

In time, hopefully these two will evolve into just the latter.

Required Build flags

We're assuming here that you used 'orocreate-pkg' to setup a new application. So you're using the UseOrocos CMake macros.

Your application's manifest.xml must depend on ocl.

Your application's CMakeLists.txt must include the line : orocos_use_package(ocl-logging)

Both steps will make sure that your libraries link with the Orocos logging libraries and that include files are found.

See note at top of file regarding TLSF's bookkeeping overhead. The pool needs to be larger than that value.

Configuring RTT::Logger logging

NOTE: this feature is not available on the official release. Skip to the next section (Configuring OCL::logging) if you're not using the log4cpp branch of the RTT

You can use any of log4cpp's configurator approaches to configure, but the deployers already know about PropertyConfigurators. You can pass a log4cpp property file to the deployer and that will be used to configure the first of the hierarchies above - the non-real-time logging used by RTT::Logger. For example

deployer-macosx --rtt-log4cpp-config-file /z/l/log4cpp.conf

where the file /z/l/log4cpp.conf is something like

# root category logs to the application appender (this level is also the
# default for all categories whose level is NOT explicitly set in this file)
log4j.rootCategory=DEBUG, applicationAppender
# orocos setup
log4j.category.org.orocos.rtt=INFO, orocosAppender
# do not also log to parent categories
log4j.additivity.org.orocos.rtt=false
log4j.appender.orocosAppender=org.apache.log4j.FileAppender
log4j.appender.orocosAppender.fileName=orocos-log4cpp.log
log4j.appender.orocosAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.orocosAppender.layout.ConversionPattern=%d{%Y%m%dT%T.%l}[%-5p]%m%n

This configuration file simply changes the output filename and format. You could also add additional appenders (e.g. to stdout, to socket) and change the logging level for sub-categories, if RTT supported them (e.g. scripting.rtt.orocos.org).

IMPORTANT Note the direction of the category name, from org to rtt. This is specific to log4cpp and other log4j-style frameworks. Using a category "rtt.orocos.org" and sub-category "scripting.rtt.orocos.org" won't do what you, nor log4cpp, expect.

Configuring OCL::logging (XML)

This is how you would setup logging from a Deployer XML file. If you prefer to use a script, see the next section.

See ocl/logging/tests/xxx.xml for complete examples and more detail, but in short

IMPORTANT You must dynamic_cast to an OCL::logging::Category* to get the logger, as shown in the constructor above. Failure to do this can lead to trouble. You must also explicitly use the OCL::String() syntax when logging. Failure to do this produces compiler errors, as otherwise the system defaults to std::string and then you are no longer real-time. See the FAQ below for more description.
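A hedged sketch of those two rules together (the logger member, category name and message are illustrative, and exact method signatures may differ between OCL versions):

```cpp
// Cast to the real-time category type, not plain log4cpp::Category.
OCL::logging::Category* logger =
    dynamic_cast<OCL::logging::Category*>(
        &log4cpp::Category::getInstance("org.me.myapp"));

// Wrap literals in OCL::String so the message stays in the real-time pool.
logger->error(OCL::String("Connection lost"));
// logger->error("Connection lost");   // would go via std::string: not real-time
```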

The last one is the most interesting. All RTT::Logger calls have been sent to the same appender as the application logs to. This means you can use the exact same logging statements in both your components (when they use OCL::Logging) and in your GUI code (when they use log4cpp directly). Less maintenance, less hassle, only one (more) tool to learn. The configuration file for the last example looks something like

# root category logs to the application appender (this level is also the
# default for all categories whose level is NOT explicitly set in this file)
log4j.rootCategory=DEBUG, applicationAppender
# orocos setup
log4j.category.org.orocos.rtt=INFO, applicationAppender
# do not also log to parent categories
log4j.additivity.org.orocos.rtt=false
# application setup
log4j.category.org.me=INFO, applicationAppender
# do not also log to parent categories
log4j.additivity.org.me=false
log4j.category.org.me.gui=WARN
log4j.category.org.me.gui.Robot=DEBUG
log4j.category.org.me.gui.MainWindow=INFO
log4j.appender.applicationAppender=org.apache.log4j.FileAppender
log4j.appender.applicationAppender.fileName=application.log
log4j.appender.applicationAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.applicationAppender.layout.ConversionPattern=%d{%Y%m%dT%T.%l}[%-5p]%c %m%n

Technical details

There is a several kilobyte overhead for TLSF's bookkeeping (~3k on 32-bit Ubuntu, ~6k on 64-bit Snow Leopard). You must take this into account, although the standard OCL TLSF pool size (256k) should cover your needs.

Only the OCL::String (in 1.x) and RTT::rt_string (in 2.x) objects in OCL::logging::LoggingEvent objects use the real-time memory pool.

When you create a category, all parent categories up to the root are created. For example, "org.me.myapp.cat1" causes creation of five (5) categories: "org.me.myapp.cat1", "org.me.myapp", "org.me", "org", and "" (the root category) (presuming none of these already exist). These all occur on the normal heap (see below).

For real-time performance, ensure that TLSF is built with MMAP and SBRK support OFF in RTT's CMake options (-DOS_RT_MALLOC_MMAP=OFF -DOS_RT_MALLOC_SBRK=OFF).

TLSF use with multiple threads is currently supported only for non-macosx platforms. Use on macosx will exhibit (understandable) corruption in the TLSF bookkeeping (causing asserts).

FAQ

Logging statements are not recorded

Q: You are logging and everything seems fine, but you get no output to file/socket/stdout (depending on what your appender is).

A: Make sure you are using an OCL::logging::Category* and not a log4cpp::Category. The latter will silently compile and run, but it will discard all logging statements. This situation can also mask the fact that you are accidentally using std::string and not OCL::String. Once both the category type and the string type are corrected, the code compiles and runs, and logging statements are recorded.

omniORBpy - python binding for omniORB

This page describes a working example of using omniORBpy to interact with an Orocos component. The example is very simple, and is intended for people who do not know where to start developing a CORBA client.

If this works (you see a line like: 0.011 [ Info ][SmallNetwork] ControlTask 'ComponentA' found CORBA Naming Service.) then you need to modify the parameter InitRef in your omniORB4.cfg (or similar, usually found in /etc/) and make it read:

InitRef=NameService=corbaname::127.0.0.1

Finally, run the python application:

python orocosclient.py

If you are not able to make your naming service work, try using the component's IOR. After running your smallnet server, copy the complete IOR printed on screen and paste it as an argument to the python program (including the word "IOR:").

python orocosclient.py IOR:0...10100

Look at the IDLs and the code to understand how things work. I am no python expert, so if the coding style looks weird to you, my apologies. Good luck!

Using CORBA

This page outlines how to use CORBA to distribute applications. The details differ by CORBA implementation and by whether you are using DNS names or IP addresses. The examples below support the ACE/TAO and OmniORB CORBA implementations.

Sample system:

Deploying components in demo.xml with deployer-corba, on machine1.me.home with IP address 192.168.12.132

Running a GUI program demogui to connect to deployer components, on machine2.me.home with IP address 192.168.12.133

Use a name server without multi-casting[1], on machine1.me.home.

Using a bash shell.

Both machines are gnulinux (though this has been verified with macosx, and mixing macosx and gnulinux)

Working DNS

If you have working forward and reverse DNS entries (ie dig machine1.me.home returns 192.168.12.132, and dig -x 192.168.12.132 returns machine1.me.home)

Localhost

Certain distros and certain CORBA versions will exhibit problems even in localhost-only scenarios (demonstrated with OmniORB under Ubuntu Jaunty Jackalope). If you can not connect to the name service running on the same machine, substitute the primary network interface's IP address for localhost in any NameService value.

NB as of RTT v1.8.2 and OmniORB v4.1.0, programs like demogui (which use RTT::ControlTaskProxy::InitOrb() to initialize CORBA) do not support -ORBDottedDecimalAddresses (in case you try to use it).

Multi-homed machines

Computers that have multiple network interfaces present additional problems. The following is for omniORB (verified with a mix of v4.1.3 on Mac OS X and v4.1.1 on Ubuntu Hardy), for a system running a name server, a deployer, and a GUI. The example system has a 192.168.1.0 wired subnet and a 10.0.10.0 wireless subnet, and a mobile vehicle that has to communicate over the wireless subnet but also has a wired interface.

The problem may appear as one of

The vehicle can not contact the name server when the wired interface is disconnected but it is up (NB on rare occasions, we've seen this even with the wired interface disconnected and down!)

Your GUI can connect to the deployer, but then locks up or throws a CORBA exception when trying to connect to certain remote Orocos items (we had this happen specifically for methods with parameters).

The solution is to forcibly specify the endPoint parameter to the name server. In the omniorb.cfg file on the computer running the name server, add (for the example networks above)

endPoint = giop:tcp:10.0.10.14:

where 10.0.10.14 is the IP address of that computer. This forces the name server to publish end points on the wireless network first. Despite this, it will still publish the wired interface, but it will come after the wireless. Specifying the endPoint parameter on the command line (instead of in the config file) will not work, as then the name server publishes the wired network first and the wireless network second.

If the above still does not work, then set the endPoint parameter in all computers' config files (note that the end point is the IP address of each computer, so it will be (say) 10.0.10.14 for the computer running the name server and the deployer, and (say) 10.0.10.21 for the computer running the GUI). This will force everyone onto the wireless network, instead of relying on what the name server is publishing.

To debug this problem, see the debugging section below; note that after starting the name server you will see it output its published endpoints (right after the configuration dump). Also, if you get the lockup, then adding the debug settings will cause the GUI or deployer to output each message and the direction/IP it is going on. If they have strayed onto the wired network it will be visibly obvious.

NB we found that the clientTransportRule and serverTransportRule parameters had no effect on this problem.

NB the above solution works no matter which computer the name server is running on (ie with the deployer, or with the GUI).

You can install Orocos for additional targets (or versions) by replacing gnulinux1.8 with another target name (or version). All target libraries can be installed at the same time; the -dev header files can be installed for only a single target and version at a time.

For your application development, you'll most likely use the Orocos Component library as well:

Installing via Macports on Mac OS X

These are instructions to install the latest version of each of RTT, KDL, BFL and OCL, on Mac OS X using Macports.

Macports does not have official ports for these Orocos projects; however, the approach below is the recommended way to load unofficial ports into Macports. [1]

Installation

These instructions use /opt/myports to hold the Orocos port files. You can substitute any other directory for MYPORTDIR (ie /opt/myports). Instructions are for bash shell - change appropriately for your own shell.

1. Download the Portfile files from this page's Attachments (at bottom of page).

2. Execute the following commands (substituting /opt/myports for the location you wish to store the Orocos port files, and ~/Downloads for the directory you downloaded the portfiles to)

6. Verify installation by downloading test-macports.xml from this page's Attachments, and then using these commands

deployer-macosx -s /path/to/test-macports.xml

This should successfully load and start the OCL HelloWorld component within the taskbrowser. You may need to specify the paths to the dynamic libraries for this to work:

export DYLD_FALLBACK_LIBRARY_PATH=/opt/local/lib

Yes, it is DYLD_FALLBACK_LIBRARY_PATH and not DYLD_LIBRARY_PATH. Search the forum if you want to know why ...

Building your application

To build against MacPorts-installed Orocos, add the following to your environment before CMake'ing your project

export CMAKE_PREFIX_PATH=/opt/local

If you already have CMAKE_PREFIX_PATH set, then append "/opt/local" to your existing entry.
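The append can be done safely whether or not the variable is already set, for example (a bash sketch; /opt/local is the MacPorts prefix used above):

```shell
# Append /opt/local to CMAKE_PREFIX_PATH, or set it if it was unset.
if [ -n "$CMAKE_PREFIX_PATH" ]; then
    export CMAKE_PREFIX_PATH="$CMAKE_PREFIX_PATH:/opt/local"
else
    export CMAKE_PREFIX_PATH=/opt/local
fi
```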

If you use Makefiles or autoconf to build your project, you'll need to tell those build systems to find Orocos headers, libraries and binaries under /opt/local. Instructions are not provided here for that.

To run using MacPorts-installed OROCOS, add the following to your environment

Attribute

You can alter the attributes of any task, program or state machine. The TaskBrowser will confirm the validity of the assignment with 'true' or 'false'.

Command

Commands are 'sent' by other components to instruct the receiver to 'reach a goal'.

When a command is entered, it is sent to the component, which will execute it in its own thread on behalf of the sender. The different stages of its lifetime are displayed by the prompt. Hitting enter will refresh the status line.

A Command might be rejected (return false) in case it received invalid arguments.

A command has a designated receiver.

A command cannot, in general, be completely executed instantaneously, so the caller should not block and wait for its completion.

But the Command object offers all functionalities to let the caller know about the progress in the execution of the command.

Commands are used for actions taking time and setpoints.

Component

Components are implemented by the TaskContext class.

It is useful to speak of a context because it defines the context in which an activity (a program) operates. It defines the interface of the component, its properties and its peer components, and uses its ExecutionEngine to execute its programs and to process commands and events.

A task's interface consists of members.

Data-Flow Ports

Data-Flow Ports are a thread-safe data transport mechanism to communicate buffered or un-buffered data between components.

When a value is Set(), it is sent to whatever is connected to that port. Use Get() to read the port.

The advantage of using ports is that they are completely thread-safe for reading and writing, without requiring user code.
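A minimal RTT 1.x sketch of the Set()/Get() usage described above (the port names are illustrative):

```cpp
// Writer side: an unbuffered data port holding a double.
RTT::WriteDataPort<double> position_out("Position");
position_out.Set(1.5);           // thread-safe write, sent to all connections

// Reader side, in a connected component:
RTT::ReadDataPort<double> position_in("Position");
double p = position_in.Get();    // thread-safe read of the last written value
```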

Events

Events are related to commands, but allow broadcasting of data, while a command has a designated receiver.

Events allows functions to be executed when a change in the system occurs.

eg. alarms, publishing state changes

Members

Members are: Commands, Methods, Ports, Attributes and Properties and Events, which are all public.

Method

Methods are used for algorithms and complex configurations.

Methods are callable by other components to 'calculate' a result immediately, just like a 'C' function.

Peer

The peers of a component are the components which are known, and may be used, by this component.

Property

Properties are meant for persistent configuration and can be written to disk.

Properties are run-time modifiable parameters, stored in XML files.

RTT on MS Windows

This page collects all the documentation users collected for building and using RTT on Windows. Note that Native Windows support is available from RTT 1.10.0 on, and that you might no longer need some of the proposed workarounds (such as using mingw or cygwin).

Linking and Compiling an application

I managed to create DEF files, and to use Microsoft's LIB tool to turn the library into something MSVC likes.

I'm no CMake expert, and don't have the time to learn **another** build scripting language. However, I created the CMake files in the usual way, built RTT and ensured it compiled cleanly. I then hacked the generated makefiles: a search of my source tree for "--out-implib" showed that the link.txt living in build\src\CMakeFiles\orocos-rtt-dynamic_win32.dir contained that string. So I added --output-def,..\..\libs\liborocos-rtt-win32.dll.def to create the DEF file and rebuilt RTT. This created the DEF file, which I then ran through the Microsoft LIB tool as described.

I then created an MSVC project, added the library to my linker settings, and made a very simple MSVC console application:

Hopefully I am now at a stage when I can actually start to evaluate RTT :-) If anyone has any ideas on how to properly get the CMakeList.txt to generate the DEF files without nasty post-CMake hacks, then I would love to hear it...

This page summarizes how to compile RTT with Microsoft Visual Studio, using the native win32 api. RTT supports Windows out of the box from RTT 1.10.0 and 2.3.0 on. OCL is supported from 1.12.0 and 2.3.0 on.

This tutorial assumes you extracted the Orocos sources and all its dependencies in c:\orocos

For new users, RTT/OCL v2.3.x or later is recommended, included in the Orocos Toolchain v2.3.x.

Rationale

We only support Visual Studio 2005 and 2008. Support for 2010 is on its way. You're invited to try VS2010 out and to suggest patches on the orocos-dev mailing list.

Orocos does not come with a Visual Studio Solution. You need to generate one using the CMake tool which you can download from http://www.cmake.org. The most important step for CMake is to set the paths to where the dependencies of Orocos are installed. So before you can get to building Orocos, you need to build its dependencies, which don't use CMake, but their own build mechanism.

Only RTT and OCL of the toolchain are supported on Windows. The ruby based 'orogen' and 'typegen' tools, part of the toolchain, are not supported. Also ROS integration is not supported on Windows.

Important notice about Release or Debug

Debug and Release builds can not be mixed in Visual Studio's C++ compiler (you will have crashes when mixing a Debug and Release DLL that has a C++ API). By convention, a Debug .DLL can be recognized because it ends with ....d.dll. We recommend that you do Release builds when evaluating the Orocos toolchain and on production systems. Debug builds are considerably larger than Release builds.

RTT Dependencies

There are two major libraries required by RTT: Boost C++ and a CORBA transport library (if you require one).

CORBA using ACE/TAO (optional)

In case you require distributed Orocos components, you need to setup ACE/TAO, which does the work under the hood for RTT. Download the latest TAO version, extract it and open the solution (ACE_wrappers/TAO/TAO_ACE_vc8.sln) file with Visual Studio. Build the 'Naming_Service_vc8' project, and make sure that you choose the configuration (Debug/Release) that fits your purpose. The Naming_Service project builds automatically the right components we need to build RTT. Check the TAO build instructions in case you encounter problems.

You must have this set as system environment variables:

set ACE_ROOT=c:\orocos\ACE_wrappers
set TAO_ROOT=%ACE_ROOT%\tao
set PATH=%PATH%;%ACE_ROOT%\bin;%ACE_ROOT%\lib

You can also set these using Configuration -> System -> Advanced -> Environment Variables

CMake (required)

Download and install cmake (http://www.cmake.org). We're going to use cmake-gui.exe to configure our build system. Use CMake version 2.6.3 or newer.

XML Parser

RTT will use its internal 'tinyxml' parser on Windows. No need to install anything for this.

Setting up CMake

First you need to add two 'PATH' entries for telling cmake where Boost and TAO are installed. In the top RTT directory, there is a file named orocos-rtt.default.cmake. Copy it to orocos-rtt.cmake (in the same directory) and add these two lines:
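The two lines themselves were not preserved on this wiki page; they point CMake at your Boost and ACE/TAO installations. Something along these (purely illustrative) lines, using the c:\orocos layout assumed above:

```cmake
# Illustrative only - adjust to where you actually installed Boost and ACE/TAO.
list(APPEND CMAKE_INCLUDE_PATH "c:/orocos/boost_1_40" "$ENV{ACE_ROOT}")
list(APPEND CMAKE_LIBRARY_PATH "c:/orocos/boost_1_40/lib" "$ENV{ACE_ROOT}/lib")
```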

Start the cmake-gui and set your source and build paths ( For example, c:\orocos\orocos-rtt-1.10.0 and c:\orocos\orocos-rtt-1.10.0\build ). Now click 'Configure' at the bottom. Check that there are no errors. If components are missing, you probably need to fix the above PATHs.

For RTT 1.12, turn OS_NO_ASM ON, for RTT 2.3.0, turn OS_NO_ASM OFF.

You probably need to click Configure again and then click 'Generate', which will generate your Visual Studio solution and project files in the 'build' directory.

Open the generated solution in MSVS and build the 'ALL_BUILD' target, which will build the RTT (and the unit tests if you enabled them).

Unit tests (Optional)

In order to enable unit tests, you need to turn on BUILD_TESTING in the CMake GUI.

The unit tests will fail if the required DLLs are not in your path. In your system settings, or on the command prompt of Windows, add c:\orocos\boost_1_40\lib and c:\orocos\ACE_wrappers\lib to your PATH environment (reboot if necessary).

Next, run a 'make install' and add the c:\orocos\bin directory to your PATH (or whatever you used as install path.) In RTT 2.3.0, the default install path is c:\Program Files\orocos (so add c:\Program Files\orocos\bin to PATH). It is recommended to keep this default, since OCL uses that too.
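For example, on the Windows command prompt (assuming the default RTT 2.3.0 install path):

```
set PATH=%PATH%;c:\Program Files\orocos\bin
```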

Now you should be able to run the unit tests. The process could be streamlined a bit more and may be improved in later releases.

Installing RTT

Once everything is built, you can use the 'INSTALL' project to copy the files into the correct installation directories. This is necessary for OCL and the other Orocos applications so that they can find headers and libraries in the expected locations.

Building OCL

Building OCL on Windows follows a similar path to RTT. Start CMake and point it to your OCL source tree and create a 'build' directory in there.

OCL Dependencies

OCL also needs to know where Boost, TAO and other dependencies are installed. Again, there's an orocos-ocl.default.cmake file which you can copy to orocos-ocl.cmake.

Compiling the RTT on Windows/cygwin

This page describes all steps you need to take in order to compile the real-time toolkit on a Windows machine. We rely on the Cygwin libraries and tools to accomplish this. Visual Studio or mingw32 are as of this writing not yet supported. Also CORBA is not yet ported.

Download and install Cygwin

You can get it from http://www.cygwin.com and use the setup.exe script. Make sure you additionally install:

where to put the code and do release management ? (trying out gitorious.org ?)

use standard approach for end user build-support files across all sub-projects (ie if CMake, then use the same approach to FindXXX and Config files). Provide example use cases to build against Orocos (ie there is no FindOrocos-XXX.cmake provided by any sub-project, which IMHO is Very Bad for a new user).

the components: Orocos has the OCL and DFKI has already open-sourced some components. In what form do we want to distribute them ?

the toolchain: where are we going, what are we going to use and lay out a schedule for that.

idea from Peter to standardize the type description on the ROS datatypes vs. oroGen's C++ parser

Integration of dependent packages (e.g. TLSF). Currently (for real-time logging), we have a circular build problem in that RTT needs log4cpp, log4cpp needs TLSF, but TLSF is installed as part of RTT. Big Problem! Peter mentioned integrating log4cpp, but I'm not sure that this is the best approach (ie long term consequences, keeping up with releases, scalability).

Integration of real-time logging from OCL into RTT

Use of real-time logging by RTT itself. Transition plan to accomplish this.

OperationInterface (no: Interface is used a lot in the code for base classes)

Misc

chicken and egg problem with the deployer, especially with basic services like real-time logging

Plugins

possibility to do autoloading based on a PATH environment variable or to do loading per-plugin

there is a cost to load a typekit that is not needed as the shared library has to be mapped in memory and typekits are quite big

Code size

instantiating RTT::Operation for void(void)

60kB for dispatching code

60kB for distributed C++ dispatching code

Yuk

Events

2.0 does not have events right now

can (and will) be implemented on top of the ports

one limitation: only one argument. Not a big issue if we have an automated wrapping like oroGen

we're now getting into the details of event handling ;-)

end

B. Second day

The discussions starts with explaining the improved TypeInfo infrastructure:

Normally, everything should be generated by the tools

If the tools don't make it, you can generate a typekit manually by:

Add a StructTypeInfo<T> instead of TemplateTypeInfo<T> (the latter still exists)

Define a boost::serialization function that decomposes your struct

ROS messages and orogen

Can orogen parse a generated ros message class ?

It can't since it does not work when a class has virtual functions. Also the ANTLR parser is not 'good enough'.

gccxml tool can help here, it also removes ANTLR then.

Sylvain explains how orogen works

List dependencies

Declare used types (header files to use)

Declare task definitions

Declare deployments

Sylvain shows how orogen requires the #ifndef __orogen in the headers listed. gccxml is a fix for this too.

Hosting on gitorious is being discussed. It allows us to group code in 'projects' and collaborate better using git.

Autoproj is discussed as a tool to bootstrap the orocos packages. It's an alternative to manually download and build everything. It may work in parallel with rosbuild, in case application software depends on both ros and orocos. This needs to be tested.

The work is divided for the rest of the day:

Charles + Peter : Yarp transport for 2.0

Markus + Peter : Find collect segfault bug

Stephen + Sylvain: Mac OS-X testing of autoproj/ruby etc.

Sylvain + Peter + Markus : gccxml into orogen

We decided to rename orogen to typegen

The day concluded with investigating the code size/compile time issue. The culprits are the operations added to the ports in the typekit code. We investigated several solutions to tackle this, especially in the light of code/typekit generation.

C. Third day

The day started with a re-evaluation of the agenda and release timelines. The proposed release date for 2.0 was August 16th.

This list of topics will be covered this week:

Documentation review

Website review

Real-time Logging

Build system review

Crash in Collect found by Markus

Yarp transport

This list of issues will be solved before 2.0.0:

Code size/compilation time issue

Tool + cmake macros to create new component projects

typegen tool to generate type kits

gitorious migration of all orocos projects, including code generation tools

OCL cleanup and migration to new cmake macros

These issues will be delayed after 2.0.0:

Thread-safe property writing (allow a peer/3rd party to change a property)

Attribute/Property resolution, ie, maybe it's easier to introduce a 'persistent' flag in properties which flags if it needs to be serialized or not. Attributes can then be removed.

Service discovery. Sylvain manages these things in a supervision layer written in Ruby. It's not clear yet how far the C++ DeploymentComponent needs to go in this issue.

Connection browsing: ask a port to which other ports it is connected such that we can visualise that

Deployment gui to show or create component networks

Full Mac-OS-X and Win32 support. These will mature in the 2.0.x releases as users on these platforms test the release.

The rest of the day continued as planned on the agenda. In the morning, a new CMake build system for components, plugins and executables was created to maximize maintainability and ease-of-use of creating new Orocos software. OCL too will switch to this system. The interface (CMake macros) and logic behind it was discussed. This tool will be further developed to be ready before the 2.0 release.

In the afternoon, the documentation and website structure was discussed. We came to the conclusion that no-one only downloads the RTT. For 2.0, they will download RTT, the infrastructure components (TaskBrowser, Deployment, Reporting, Diagnostics etc) and the tool-chain (typekit generation, component generation etc.). This will require a restructuring of the website and the documentation, to no longer be RTT-centric, but 'Orocos ecosystem' centric.

The documentation will contain 3 pillars:

Getting started

Download toolchain

Build

Run demo

Setting up a real system

Your first component

Deploying it

Creating a component network

Reference documentation

API

Cheat-Sheet

Manuals

The reference manuals will be cleaned up too, so that they serve better as reference material and less as a first read for new users.

During this day, the code size problem, typegen development and Yarp transport were also further polished.

It ended with a visit to 'Parc Guell' and a walk to the old city centre, where we enjoyed a well deserved tapas meal.

See detailed instructions in URL's above and below, but basically (unless otherwise noted, all actions are in MSys Unix shell, and, all unix-built items are installed in /mingw (which is c:\msys\1.0\mingw in DOS prompt) )

Set your PATH

Test your setup

Next test your setup with a 'make check'. Currently 4 of 8 tests fail ... more work to do here.

Partial ACE/TAO CORBA build

This gets most of ACE/TAO to build, but not yet all.

download, follow MinGW build instructions on the website.
add "#undef ACE_LACKS_USECONDS_T" to ace/config-win32-mingw.h before compiling
copy ace/libACE.dll to /mingw/lib
make TAO ** this fails
You can build all we need by manually doing ''make'' in the following directories. Note that the last couple of TAO dirs have problems.
ace, ace/protocols, kokyu, tao, tao/TAO_IDL, tao/orbsvcs

Make LoggingService support lookup of ports by category (called via operation to do so)

Support multiple appenders per category

Either logging messages go to stderr if the appender is not yet connected to a category, or they continue to get discarded

Deployer by default starts LoggingService and FileAppender (to orocos.log). User can turn this behaviour off with a command line parameter, allowing them to configure the logging system via a site deployment file.

Add streaming capability : logger->debug << xyz;

Replace RTT::Logger with calls to RTT::Logging framework

Complete OCL::String plugin to support use within scripting

Add LoggingPlugin

support use from scripting to query, modify and use OCL::Category

Add additional appenders (eg socket)

Services discussion

Peter explains how services made their entry into the design and how they can be used.

Services have to have different names from ports (v2)

TaskContext has a default service (this->provides())

TC is really a service container/executor.

Properties and operations must be in a service

Ports were _not_ in a service. This will be changed such that ports belong to a Service. A Provides Service can have both input and output ports. This is reasonable and meets real-world semantics, however, it does sound slightly contradictory. Must be well explained with examples.

Talking of dropping the “Providers” in “ServiceProviders”, and just having “Services” and “ServiceRequestors”

E. Fifth day

It's hacking day and implementing/finishing most of what we started this week.

Stephen is testing on Mac-OS-X. Found a bug in tlsf where NULL and 0 were mixed, causing it not to handle memory exhaustion cases correctly.

Peter makes the API changes that were proposed and fixes bugs others find on the go.

Sylvain is setting up the gitorious project

The Road to RTT 2.0

This Chapter collects all information about the migration to RTT 2.0. Nothing here is final, it's a scratch book to get us there. There are talk pages to discuss the contents of these pages.

These are the major work areas:

New Data Flow API, proposed by S. Joyeux

Streamlined Execution Flow API, proposed by P. Soetens (RTT::Message)

Full distribution support and cleanup (Events in CORBA)

Alternative Data Flow transport layer (non blocking).

Small tools for interacting with Components

If you want to contribute, you can post your comments in the following wiki pages. It will be (hopefully) more concise and straightforward compared with the developers Forum.

Which weakness have you detected in RTT?

Which features would you like to have in RTT 2.0?

These items are worked out on separate Wiki pages.

RTT and OCL 2.0 have been merged on the master branches of all official git repositories:

The sections below formulate the major goals which RTT 2.0 wishes to attain.

Simplicity

The Real-Time Toolkit shouldn't be in the way of building complex applications; instead it should help make that easier. We're improving on different fronts to make it simpler to use for both beginners and experienced power users.

API: user oriented

The API is clearly separated into public (RTT user) and private (RTT internal) parts. The number of concepts is reduced and a sane default is chosen where alternatives are possible. Policies allow users to deviate from the default behavior.

Tooling: enhancing the experience

The RTT is a very extensible library. When users require an extension, they don't need to write much or any additional code. Tools assist in generating helper libraries for adding user types (type plugins) or user interfaces (service plugins) to the RTT. The generated code is readable, understandable and documented. If required, it can be overridden by hand-written code so that tools in development do not block user development.

Component model: components are simple

RTT 2.0 components are simple to understand and explain. In essence they are stateful input/output systems that offer services to supervisors.

The input/output is offered by means of port based communication between data processing algorithms. An input port receives data, an output port sends data. The algorithms in the component define the transformation from input to output.

Service based communication offers operations such as configuration or task execution. A component always specifies whether a service is provided or requested. This allows run-time dependency and system state checking, but also automatic connection/disconnection management, which is important in distributed environments.

Components are stateful. They don't just start processing data right away. They can validate their preconditions, be queried for their current state and be started and stopped in a controlled manner. Although there is a standard state machine in each component that regulates these transitions, users can extend these without limitations.

Acceptable Upgrade Path

The first users of RTT 2.0 will be current users, seeing solutions for problems they have today. The upgrade path will be documented and assistive tools will be provided. Whenever possible, backwards compatibility is maintained.

Interoperability

The field knows a number of successful robotics frameworks, languages and operating systems. RTT 2.0 is designed to allow building bridges to them.

Other frameworks

RTT 2.0 can easily interoperate with other robotics frameworks that provide the concepts of port based data flow communication and functional services.

Other languages

RTT 2.0 offers the 1.x real-time scripting language, but in addition binds to other languages as well. A real-time language binding to Lua is offered. Non-real-time bindings are offered over a language-independent CORBA interface.

Other operating systems

RTT 2.0 runs on Linux, RTAI, Xenomai, Mac OS-X and Windows. These are the main operating systems of the current advanced robotics domain.

Robustness

Complex systems are hard to start up, shut down, or recover when components become dysfunctional. RTT 2.0 aids the system architect in maintaining a robust machine controller, even in distributed setups.

Service oriented architectures

Components are aware of the available services and have a chance to execute fall-back scenarios when they disappear. They are notified in time so that they can take proper action and recover and resume when a service becomes available again. Local and global supervisors keep track of these state changes so that these mechanisms do not need to be hard-coded into each component.

Separation between real-time and not real-time processes

A real-time component cannot be disturbed by the addition of a lower-priority communicating peer. This allows building systems incrementally around a hard real-time core. The RTT decouples the communication between sender and receiver and allows real-time data transports to assure delivery.

Contribute! Which weakness have you detected in RTT?

INTRODUCTION

You can edit this page to post your contribution to Orocos RTT 2.0. Please, keep your comment concise and clear: if you want to launch a long debate, you can still use the Developers Forum! Short examples can help other people understand what you mean.

A) According to section 4 of the Orocos Component Builder's Manual, the callback of a synchronous event is executed inside the thread of the event's emitter. Imagine that TaskA emits an event, and TaskB, who subscribes synchronously to it, has a handler with an infinite loop: the behavior of TaskA would be jeopardized. Keep in mind that:

TaskA has no way of knowing what will happen inside the callback of TaskB.

It can't prevent TaskB from connecting synchronously.

Once blocked, there is nothing it can do.

B) What would happen if a TaskContext is attached to a PeriodicActivity, but internally it was designed to run as a NonPeriodicActivity? What would happen if a sensor with a refresh rate of 10 Hz is read from a Component deployed at 1000 Hz? Maybe the Activity of the TC should be defined by the TC itself, even if this means it is hard-coded in the TC.
C) Because of single thread serialization, a sleep in one task can affect other tasks which are neither aware of it nor responsible for it. See the source code in the sub page.

Problems with single thread serialization

Because of single thread serialization, something unexpected for the programmer happens.

1) You expect TaskA to be independent from TaskB, but it isn't. If you think it is a problem of computer resources, change the activity frequency of one of the two tasks.

Suggestion: A) let the programmer choose whether single thread serialization is used or not. B) keep the 1 thread per 1 activity policy by default. It will help less experienced users avoid common errors. Experienced users can decide to "unleash" the power of STS if they want to.

2) after the "block" for 0.5 seconds, the "lost cycles" are executed all at once. In other words, updateHook is called 5 times in a row. This may have very unpredictable results. It could be desirable for some applications (a filter with a data buffer) or catastrophic in others (a motion control loop).

Suggestion: C) let the user decide whether the "lost cycles" of the PeriodicActivity need to be executed later or are definitively lost.

using namespace std;
using namespace RTT;
using namespace Orocos;

TimeService::ticks _timestamp;

double getTime() { return TimeService::Instance()->getSeconds(_timestamp); }

class TaskA : public TaskContext
{
protected:
    PeriodicActivity act1;
public:
    TaskA(std::string name)
        : TaskContext(name), act1(1, 0.10, this->engine())
    {
        // Start the component's activity:
        this->start();
    }
    void updateHook() { printf("TaskA [%.2f] Loop\n", getTime()); }
};

class TaskB : public TaskContext
{
protected:
    int num_cycles;
    PeriodicActivity act2;
public:
    TaskB(std::string name)
        : TaskContext(name), act2(2, 0.10, this->engine())
    {
        num_cycles = 0;
        // Start the component's activity:
        this->start();
    }
    void updateHook()
    {
        num_cycles++;
        printf("TaskB [%.2f] Loop\n", getTime());
        // once every 20 cycles (2 seconds), a long calculation is done
        if (num_cycles % 20 == 0) {
            printf("TaskB [%.2f] before calling long calculation\n", getTime());
            // the calculation takes longer than expected (0.5 seconds).
            // it could be something "unexpected", desired or even a bug...
            // that is not relevant for this example.
            for (int i = 0; i < 500; i++)
                usleep(1000);
            printf("TaskB [%.2f] after calling long calculation\n", getTime());
        }
    }
};

int ORO_main(int argc, char** argv)
{
    TaskA tA("TaskA");
    TaskB tB("TaskB");
    // notice: the tasks have not been connected; there isn't any relationship between them.
    // In the mind of the programmer, each of them is independent, because they have their own activity.
    // If one of the two frequencies of the PeriodicActivities is changed there isn't any problem,
    // since they run in 2 separate threads.
    getchar();
    return 0;
}

Contribute! Suggest a new feature to be included in RTT 2.0.

INTRODUCTION

Please be concise and provide a short example and your motivation to include it in RTT. First ask yourself:

"Am I the only beneficiary of this new feature?"

"Can this feature be obtained with a simple layer on the top of RTT ?"

If you answered "no" to both questions and you have already debated the new feature in the Developers forum, please post your suggestion here.

Create Reference Application Architectures

In order to lower the learning curve, people often request complete application examples which demonstrate well-known application architectures such as kinematic robot control, application configuration from a central database, or topic-based data flow topologies.

1 Central Property Service (ROS like) This task sets up components such that they get the system-wide configuration from a dedicated property server. The property server loads an XML file with all the values and other components query these values. Advanced components even extend the property server at places. A GUI is not included in this work package.

2 Universal Robot Controller (Using KDL, OCL, standard components) This application has a robot component to represent the robot hardware, a controller for joint space and cartesian space and a path planner. Users can start from this reference application to control their own robotic platform. A GUI is not included in this work package.

3 Topic based data flow (ROS and CORBA EventService like) A deployer can configure components such that their ports are connected to 'global' topics for sending and receiving. This is similar to what many existing frameworks do today and may demonstrate how compatibility with these frameworks can be accomplished.

4 GUI communication with Orocos How a remote GUI could connect to a running application.

Please add yours

Detailed Roadmap

These pages outline the roadmap for RTT-2.0 in 2009. We aim to have a release candidate by December 2009, with the release following in January 2010.

A work package is divided into tasks with deliverables.

All deliverables are public and are made public without delay.

All development is done in git repositories.

For each change committed to the local git repository, that change is committed to a public repository hosted at github.com within 24 hours.

For each task and at the end of each work package, all unit tests are expected to pass. In case additional unit tests are required for a work package, these are listed explicitly as deliverables.

The order of execution of tasks within a work package is indicative and may differ from the actual order.

When the form of a deliverable is 'Patch set', this is equivalent to one or more commits on the public git repository.

WP1 RTT Cleanup

This work package contains structural clean-ups for the RTT source code, such as the CMake build system, portability, and making the public interface slimmer and explicit. RTT 2.0 is an ideal mark point for such changes. Most of these reorganizations have broad support from the community. This package is put up front so that early adopters need to switch to the new code structure only once, at the beginning, and so that all subsequent packages are executed in the new structure.

Links : (various posts on Orocos mailing lists)

Allocated Work : 15 days

Tasks:

1.1 Partition in name spaces and hide internal classes in subdirectories.

A namespace and directory partitioning will once and for all separate the public RTT API from internal headers. This will provide a drastically reduced class count for users, while allowing developers to narrow backwards compatibility to only these classes. This also offers the opportunity to remove classes that are for internal use only but are in fact never used.

Deliverable | Title | Form
1.1.1 | Internal headers are in subdirectories | Patch set
1.1.2 | Internal classes are in nested namespaces of the RTT namespace | Patch set

1.2 Improve CMake build system

Numerous suggestions have been made on the mailing list for improving portability and building Orocos on non-standard platforms.

Deliverable | Title | Form
1.2.1 | Standardized on CMake 2.6 | Patch set
1.2.2 | Use CMake lists instead of strings | Patch set
1.2.3 | No more use of Linux specific include paths | Patch set
1.2.4 | Separate finding from using libraries for all RTT dependencies | Patch set

1.3 Group user contributed code in rtt/extras.

This directory offers variants of implementations found in the RTT, such as new data type support, specialized activity classes etc. In order not to clutter up the standard RTT API, these contributions are organized in a separate directory. Users are warned that these extras might not be of the same quality as native RTT classes.

Deliverable | Title | Form
1.3.1 | Orocos rtt-extras directory | Directory in RTT

1.4 Improve portability

Some GNU/GCC/Linux specific constructs have entered the source code, which makes maintenance and portability to other platforms harder. To structurally support other platforms, the code will be compiled with another (non-gnu) compiler and a build flag ORO_NO_ATOMICS (or similar) is added to exclude all compiler and assembler specific code and replace it with ISO-C/C++ or RTT-FOSI compliant constructs.

Deliverable | Title | Form
1.4.1 | Code compiles on non-gnu compiler | Patch set
1.4.2 | Code compiles without assembler constructs | Patch set

1.5 Default to activity with one thread per component

The idea is to provide each component with a robust default activity object which maps to exactly one thread. This thread can periodically execute or be non periodic. The user can switch between these modes at configuration or run-time.

Deliverable | Title | Form
1.5.1 | Generic Activity class which is by default present in every component | Patch set
1.5.2 | Unit test for this class | Patch set

1.6 Standardize on Boost Unit Testing Framework

Before the other work packages are started, the RTT must standardize on a unit test framework. Until now, this has been the CppUnit framework. The more portable and configurable Boost UTF has been chosen for unit testing of RTT 2.0.

Deliverable | Title | Form
1.6.1 | CppUnit removed and Boost UTF in place | Patch set

1.7 Provide CMake macros for applications and components

When users want to build Orocos components or applications, they require flags and settings from the installed RTT and OCL libraries. A CMake macro which gathers these flags for compiling an Orocos component or application is provided. This is inspired by how ROS components are compiled.

Deliverable | Title | Form
1.7.1 | CMake macro | CMake macro file
1.7.2 | Unit test that tests this macro | Patch set
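Usage of the macro described in task 1.7 might look like the sketch below. The macro and variable names are assumptions for illustration only, since the actual interface is itself a deliverable of this task:

```cmake
# Hypothetical usage sketch -- names are not final.
find_package(Orocos-RTT REQUIRED)
# The macro gathers the RTT/OCL compile and link flags for us:
orocos_component(my_controller MyController.cpp)
```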

1.8 Allow lock-free policies to be configured

Some RTT classes use hard-coded lock-free algorithms, which may be in the way (due to resource restrictions) for some embedded systems. It should be possible to change the policy to not use a lock-free algorithm in that class (cf. the 'strategy' design pattern). An example is the use of AtomicQueue in the CommandProcessor.

Deliverable | Title | Form
1.8.1 | Allow to set/override lock-free algorithm policy | Patch set

CMake Rework

This page collects all the data and links used to improve the CMake build system, such that you can find quick links here instead of scrolling through the forum.

An alternative solution for users of RTT and OCL is installing the Orocos-RTT-target-config.cmake macros, which serve a similar purpose as the pkgconfig .pc files: they accumulate the flags used to build the library. This may be a solution for Windows systems. Also, CMake suggests that .pc files are only 'suggestive' and that the standard CMake macros must still be used to fully capture and store all information about the dependency you're looking at.

Directories and namespace rework

The orocos/src directory reflects the /usr/include/rtt directory structure. I'll post it here from the user's point of view, i.e. what she finds in the include dir:

Abbrevs: (N)BC: (No) Backwards Compatibility guaranteed between 2.x.0 and 2.y.0. Backwards compatibility is always guaranteed between 2.x.y and 2.x.z. In case of NBC, a class might disappear or change, as long as it is not a base class of a BC qualified class.

Directory | Namespace | BC/NBC | Comments | Header File list
rtt/*.hpp | RTT | BC | Public API: maintains BC, a limited set of classes and interfaces. This is the most important list to get right. A header not listed in here goes into one of the subdirectories. Please add/complete/remove. |

CORBA transport files. Users include some headers, some not. Should this also have the separation between rtt/corba and rtt/corba/internal ? I would rename the IDL modules to RTT::corbaidl in order to clear out compiler/doxygen confusion. Also note that current 1.x namespace is RTT::Corba.

rtt/property/*.hpp | RTT::property | BC | Formerly 'rtt/marsh'. Marshalling and loading classes for properties. | CPFDemarshaller.hpp CPFDTD.hpp CPFMarshaller.hpp

rtt/dlib/*.hpp | RTT::dlib | BC | As-is static distribution library files. They are actually a form of 'extras'. Maybe they belong in there... |

We need this to allow installing multiple -dev versions (-gnulinux + -xenomai, for example) in the same directory.

rtt-target.h <target>

Will go: 'rtt/impl' and 'rtt/boost'.

Open question to be answered: Interfaces like ActivityInterface, PortInterface, RunnableInterface etc. -> Do they go into rtt/, rtt/internal or maybe rtt/interface ?

!!! PLEASE add a LOG MESSAGE when you edit this wiki to motivate your edit !!!

WP2 Data Flow API and Implementation Improvement

Context: Because the current data flow communication primitives in RTT limit the reusability and potential implementations, Sylvain Joyeux proposed a new, but fairly compatible, design. It is intended that this new implementation can almost transparently replace the current code base. Additionally, this package extends the DataFlow transport to support out-of-band real-time communication using Xenomai IPC primitives.

Sylvain's code has initial CORBA support. The plan is to cooperate on the implementation and offer the same or better features as the current CORBA implementation does. Also the DataFlowInterface.idl will be cleaned up to reflect the new semantics.

Deliverable | Title | Form
2.2.1 | CORBA enabled data flow between proxies and servers which uses the RTT type system merged on RTT-2.0 branch | Patch set

2.3 Allow Real-Time data port access with CORBA Proxy

A disadvantage of the current data port is that ports connected over CORBA may cause stalls when reading or writing them. The Proxy or Server implementation should, if possible, do the communication in the background and not let the other component's task block.

The current lock-free data connections allocate memory for allowing access by 16 threads, even if only two threads connect. One solution is to let the allocated memory grow with the number of connections, such that no more memory is allocated than necessary.

It is often argued that CORBA is excellent for setting up and configuring services, but not for continuous data transmission. There are for example CORBA standards that only mediate setup interfaces but leave the data communication connections up to the implementation. This task looks at how ROS and other frameworks set up out-of-band data flow and how such a client-server architecture can be added to RTT/CORBA.

Deliverable | Title | Form
2.5.1 | Report on out of band implementations and similarities to RTT | Email on Orocos-dev

2.6 Create automatic marshalling of user types

Since the out-of-band communication will require objects to be transformed to a byte stream and back, a marshalling system must be in place. The idea is to let the user specify his data types as IDL structs (or equivalent) and to generate a toolkit from that definition. The toolkit will re-use the generated CORBA marshalling/demarshalling code to provide this service to the out-of-band communication channels.

Deliverable | Title | Form
2.6.1 | Marshalling/demarshalling in toolkits | Patch set
2.6.2 | Tool to convert data specification into toolkit | Executable

2.7 Create out-of-band data flow communication

The first communication mechanism to support is data flow. This will be demonstrated with a Xenomai RTPIPE implementation (or equivalent) which is set up between a network of components.

In line with modern programming practice, the unit tests should always exercise the implementation and pass. Documentation and examples are provided for users and complement the unit tests.

Deliverable | Title | Form
2.8.1 | Unit tests updated | Patch set
2.8.2 | rtt-examples, rtt-exercises updated | Patch set
2.8.3 | orocos-corba manual updated | Patch set

2.9 Organize and Port OCL deployment, reporting and taskbrowsing

RTT 2.0 data ports will require a coordinated action from all OCL component maintainers to port and test their components against OCL 2.0 in order to use the new data ports. This work package is only concerned with upgrading the Deployment, Reporting and TaskBrowser components.

Deliverable | Title | Form
2.9.1 | Deployment, Reporting and TaskBrowser updated | Patch set

WP3 Method / Message / Event Unified API

Context: Commands are too complex for both users and framework/transport implementers. However, current day-to-day use confirms the usability of an asynchronous and thread-safe messaging mechanism. It was proposed to reduce the command API to a message API and unify the synchronous / asynchronous relation between methods and messages with synchronous / asynchronous events. This will lead to simpler implementations, simpler usage scenarios and reduced concepts in the RTT.

The registration and connection API of these primitives also falls under this WP.

In contrast to commands, each message invocation leads to a new message sent to the receiver. This requires heap management from a real-time memory allocator, such as the highly recommended TLSF (Two-Level Segregate Fit) allocator, which must be integrated into the RTT code base. If the RTOS provides one, the native RTOS memory allocator is used instead, as in Xenomai.

Deliverable | Title | Form
3.1.1 | Real-time allocation integrated in RTT-2.0 | Patch set

3.2 Message implementation

Unit test and implement the new Message API for use in C++ and scripts. This implies a MessageProcessor (replaces CommandProcessor), a 'messages()' interface and using it in scripting.

Deliverable | Title | Form
3.2.1 | Message implementation for C++ | Patch set
3.2.2 | Message implementation for Scripting | Patch set

3.3 Demote the Command implementation

Commands (as they are now) become second-rank citizens: they no longer appear in the interface, being replaced by messages. Users may still build Command objects at the client side, both in C++ and in scripting. Whether identical functionality to today's Command objects is needed, or even feasible, is yet to be investigated.

Deliverable | Title | Form
3.3.1 | Client side C++ Command construction | Patch set
3.3.2 | Client side scripting command creation | Patch set

3.4 Unify the C++ Event API with Method/Message semantics

Events today duplicate much of the method/command functionality, because they also allow synchronous / asynchronous communication between components. The intention is to replace much of the implementation with interfaces to methods and messages, and to let events cause Methods to be called or Messages to be sent. This change will remove the EventProcessor, which is replaced by the MessageProcessor, greatly simplifying the event API and semantics for new users. Another change is that an Event can only be made callable from the component's interface by registering it as a method or message.

Deliverable | Title | Form
3.4.1 | Connection of only Method/Message objects to events | Patch set
3.4.2 | Adding events as methods or messages to the TaskContext interface | Patch set

3.5 Allow event delivery policies

Adding a callback to an event puts a burden on the event emitter. The owner of the event must be able to impose a policy on the event so that this burden is bounded. One such policy could be that all callbacks must be executed outside the thread of the owning component. This task extends the RTT with such a policy.

Deliverable | Title | Form
3.5.1 | Allow to set the event delivery policy for each component | Patch set

3.6 Allow to specify requires interfaces

Today, data ports can be connected automatically because both the provided and the required data appear in the interface. This is not so for methods, messages or events. This task makes it possible to describe which of these primitives a component requires from a peer, so that they can be connected automatically during application deployment. The required primitives are grouped in interfaces, such that they can be connected as a group from provider to requirer.

Deliverable | Title | Form
3.6.1 | Mechanism to list the requires interface of a component | Patch set
3.6.2 | Feature to connect interfaces in deployment component | Patch set

3.7 Improve and create Method/Message CORBA API

With the experience of the RTT 1.0 IDL API, the existing API is improved to reduce the danger of memory leaks and allow easier access to Orocos components when using only the CORBA IDL. The idea is to remove the Method and Command interfaces and change the create methods in CommandInterface and MethodInterface to execute functions.

Deliverable | Title | Form
3.7.1 | Simplify CORBA API | Patch set

3.8 Port new Event mechanism to CORBA

Since the new Event mechanism integrates seamlessly with the Method/Message API, a CORBA port that allows remote components to subscribe to component events should be straightforward to make.

Deliverable | Title | Form
3.8.1 | CORBA IDL and implementation for using events | Patch set

3.9 Update documentation, unit tests and Examples

In line with modern programming practice, the unit tests should always exercise the implementation and pass. Documentation and examples are provided for users and complement the unit tests.

Deliverable | Title | Form
3.9.1 | Unit tests updated | Patch set
3.9.2 | rtt-examples, rtt-exercises updated | Patch set
3.9.3 | Orocos component builders manual updated | Patch set

3.10 Organize and Port OCL deployment, taskbrowsing

The new RTT 2.0 execution API will require a coordinated action from all OCL component maintainers to port and test their components against OCL 2.0 in order to use the new primitives. This work package is only concerned with upgrading the Deployment, Reporting and TaskBrowser components.

Deliverable | Title | Form
3.10.1 | Deployment, Reporting and TaskBrowser updated | Patch set

WP4 Create Reference Application Architecture

To lower the learning curve, people often request complete application examples that demonstrate well-known application architectures such as kinematic robot control. This work package fleshes out such an example.

Links : (various posts on Orocos mailing lists)

Estimated Work : 5 days for the application architecture with documentation

Tasks:

4.1 Universal Robot Controller (Using KDL, OCL, standard components)

This application has a robot component representing the robot hardware, a controller for joint-space and Cartesian-space control, and a path planner. Users can start from this reference application to control their own robotic platform. Both axes and end effector can be controlled in position and velocity mode; a state machine switches between these modes. A GUI is not included in this work package.

Deliverable | Title | Form
4.1.1 | Robot Controller example | tar ball

Full distribution support

There are two major changes required in the CORBA IDL interface.

- A new interface for attaching callbacks to events in the component
- A rewrite of the DataFlowInterface, MethodInterface, and CommandInterface / MessageInterface

The first point will be relatively straightforward, as events attach methods and messages, which will be represented in the CORBA interface as well.

The DataFlowInterface will be adapted to reflect the rework of the new data flow API. Much will depend on whether the data flow goes out-of-band or through CORBA.

The MethodInterface should no longer work with 'session' objects; all calls are related to the main interface, such that a method object can be freed after invocation.

The CommandInterface might be removed, in case it can be 'reconstructed' from lower-level primitives. A MessageInterface, which allows sending messages, will replace it, analogous to the existing MethodInterface.

The 'ControlTask' interface will remain mostly as is, extended with events() and messages().

Improved Deployment

Improved Reporting

Data flow logs are now sample based, such that you can trace the flow and state of connections.

Method vs Operation

The RTT 1.x Method, Command and Event APIs have been removed and replaced by Method/Operation. Details are at Methods vs Operations

Real-Time Allocation

RTT includes a copy of the TLSF library for supporting places where real-time allocation is beneficial. The RT-Logger infrastructure and the Method/Operation infrastructure take advantage of this. Normal users won't use this feature directly.

A real-time MQueue transport

Data flow between processes is now possible in real-time. The real-time MQueue transport transports data between processes using POSIX MQueues, both on standard Linux and in Xenomai.

For each type to be transported using the MQueue transport, a separate transport typekit must be available (this may change in the final 2.0 release).

Simplified API

Creating a component has been greatly simplified and the amount of code to write has been reduced to the absolute minimum. Documenting operations or ports is now optional. Attributes and properties can be added using a plain C++ class variable, and the need to specify templates has been removed in several places.

Services

Component interfaces are now defined as services, and a component can 'provide' or 'require' a service. This can be used to connect callers to operations at run-time without hand-written lookup code. For example:

This page is for helping you understand what's in RTT/OCL 2.0.0-beta2 release and what's not.

See the RTT 2.0.0-beta1 page for the notes of the previous beta, these will not be repeated here.

Caveats

Like in any beta, first the bad things:

Do not use this release on real machines !

There are *no* guarantees for real-time operation yet.

The API is 'pretty' stable, but the type system rework may still affect RTT 2.0 typekits (aka RTT 1.0 toolkits). This release will certainly not be binary compatible with the final 2.0.0 release.

Do not manually upgrade your code ! Use the rtt2-converter script found on this site first.

Reacting to Operations (former Event) is not yet possible in state machine scripts.

If other tests fail, this may be caused by overly strict timing checks, but you can report them anyway on the orocos-dev mailing list or the rtt-dev website forum.

New Features

See the RTT 2.0.0-beta1 page for the features added in beta1. Most features below relate to the CORBA transport.

Feature compatibility with RTT 1.x

This release is able to build the same type of applications as RTT 1.x. It may be rough around the edges, but no big chunks of functionality (or unit tests) have been left out.

Updated CORBA IDL

Want to use an Orocos component from another language or computer ? The simplified CORBA IDL gives quick access to all properties, operations and ports.

Transparent remote or inter-process communication

The corba::TaskContextProxy and corba::TaskContextServer allow fully transparent communication between components, providing the same semantics as in-process communication. The full TaskContext C++ API is available in IDL.

Improved memory usage and reduced bandwidth/callbacks

Calling an operation or setting a parameter is done with a single call from client to server. No callbacks from server to client are made, as was the case in RTT 1.x. This saves a lot of memory on both client and server side and eliminates virtually all memory leaks related to the CORBA transport.

Adapted OCL components

TaskBrowser and (Corba)Deployment code is fully operational and feature-equivalent to RTT 1.x. One can deploy Orocos components using a CORBA deployer and connect to them using other deployers or taskbrowsers.

RTT and OCL Cleanup

This work package claims all remaining proposed clean-ups for the RTT source code. RTT 2.0 is an ideal mark point for doing such changes. Most of these reorganizations have broad support from the community.

1. Partition into namespaces and hide internal classes in subdirectories. A namespace and directory partitioning will once and for all separate the public RTT API from internal headers. This will present a drastically reduced class count to users, while allowing developers to narrow backwards compatibility to only these classes. It also offers the opportunity to remove classes that are intended for internal use but are in fact never used.

2. Improve the CMake build system. Numerous suggestions have been made on the mailing list for improving portability and building Orocos on non-standard platforms.

3. Group user-contributed code in rtt-extras and ocl-extras packages. These packages offer variants of implementations found in the RTT and OCL, such as new data type support, specialized activity classes, etc. In order not to clutter up the standard RTT and OCL APIs, these contributions are organized in separate packages. Other users are warned that these extras might not be of the same quality as native RTT and OCL classes.

I prefer 3) as it has the basic functionality we need, is license-compatible, has a good design, and we've been offered developer access to modify it. I also think modifying a slightly less well-known framework will be easier than getting some of our modifications into log4cxx.

NOTE: on the ML I was using the logback term 'logger', but log4cpp calls it a 'category'. I am switching to 'category' from now on!

Modify the getCategory() function in the hierarchy maintainer to return our OCL::Category instead of log4cpp::Category. Alternatively, leave it producing log4cpp::Category but contain that within the OCL::Category object (a has-a instead of an is-a relationship, in OO speak). The alternative means less modification to log4cpp, but worse performance and potentially more wrapping code.

Deployment

I have a working prototype of the OCL deployment for this (without the actual logging though), and it is really ugly. As in Really Ugly! To simplify the format and number of files involved, and reduce duplication, I suggest extending the OCL deployer to better support logging.

The logger component is no more than a container for ports. Why special case this? Simply to make life easier for the deployer and to keep the deployer syntax and semantic model similar to what it currently is. A deployer deploys components - the only real special casing here is the connecting of ports (by the logger code) that aren't mentioned in the deployment file. If you use the existing deployment approach, you have to create a component per category, and mention the port in both the appenders and the category. This is what I currently have, and as I said, it is Really Ugly.

Important points

There will probably need to be a restriction that, to maintain real-time behaviour, categories are found before a component is started (e.g. in configureHook() or startHook()).

Note that not all OCL::Category objects contain a port. Only those category objects with associated appenders actually have a port. This is how the hierarchy works. If you have category "org.me.myapp.1.2.3" and it has no appenders but your log level is sufficient, then the logging action gets passed up the hierarchy. Say that category "org.me.myapp" has an appender (and no logging level in between stops this logging action); then that appender will actually log the event.

We should also create toolkit and transport plugins to deal with the log4cpp::LoggingEvent struct. This will allow for remote appenders, as well as viewing within the taskbrowser.

Port names would perhaps be something like "org.me.myapp.C1" => "log_org_me_myapp_C1".

Real-Time Strings ?

It's not so much the string that needs to be real-time, but the stringstream, which converts our data (strings, ints, ...) into a string buffer. Conveniently, the boost::iostreams library lets you create a real-time string stream in two lines of code:

If user code 'only' uses const& to strings or C-strings, there is no need for an rt_string, but there is for an rt_stringstream. The above code realizes that with a statically allocated (and non-expandable!) char buffer. Replacing this buffer with a dynamically growing one will probably need an rt_string after all.

Unfortunately, the log4cpp::LoggingEvent is passed through RTT buffers, and it has std::string members. So we need rt_string as well, though rt_stringstream will also be very useful.

Warning: for anyone using boost::iostreams as above, either clear the array to 0's first, or ensure you explicitly write the string termination character ('\0'). The out << "..."; statement does not terminate the string otherwise. Also, I did not need the "space ... to avoid stack smashing abort" bit on Snow Leopard with gcc 4.2.1.

Using boost::iostreams repeatedly ... you need to reset the stream between each use.

Problems/Questions/Issues

If a component logs to a category before the Logger is configured (and hence before the buffer ports and appender associations are created), the logging event is lost: at that time no appenders exist. This also means that, by default, logging events from any component that logs prior to configuration time are lost. I think this requires further examination, but it would likely involve more change to the OCL deployer.

The logger configure code presumes that all appenders already exist. Is this an issue?

Is the port-category association a shared_ptr<port> style, or does the category simply own the port?

If the logger component has the ports added to it as well as to the category, then you could peruse the ports within the taskbrowser. Is this useful? If this is useful, is it worth making the categories and their levels available somehow for perusal within the taskbrowser?

Redesign of the data flow interface

write ports are now common to all types of connections, and writing is "send and forget"

read ports still specify their type (data or buffer). The management of the connection type is offloaded onto the port object (i.e. there is no longer an intermediate ConnectionInterface object)

the ports maintain a list of "connected" ports. It is therefore possible to do some connection management, i.e. one knows who is listening to what.

Here is the mail that led to this implementation:

The problems

the current implementation is not about data connections (getting data flowing from one port to another). It is about managing shared memory places, where different ports read and write. That is quite obvious for the data ports (i.e. there is a shared data sample that anyone can read or write), and is IMO completely meaningless for buffer ports. Buffer ports are really in need of a data flow model (see below for a more specific critique of multi-output buffers)

Per se, this does not seem a problem. Data is getting transmitted from one port to the other, isn't it ?

Well, actually it is a problem, because it forbids a clean connection management implementation. Why? Because there is no way to know who is reading and who is writing ... Thus the completely useless disconnect() call. Why useless? Because if you do (this is pseudo-code of course):

connect(source, dest)
source.disconnect()

Then dest.isConnected() returns true, even though dest will not get any data from anywhere (there is no writer anymore on that connection).

This is more general, as it is for instance very difficult to implement proper connection management in the CORBA case.

Because of this connection management issue, it is very difficult to implement a "push" model. It leads to huge problems with the CORBA transport when wireless is bad, because each pop or get needs a few calls.

It makes the whole implementation a huge mess. There is at least twice the number of classes normally needed to implement a connection model *and* code is not reused (DataPort is actually *not* a subclass of both ReadDataPort and WriteDataPort, same for buffers).

We already had a long thread about multiple-output buffered connections. I'll summarize what for me were the most important points:

the current implementation allows distributing the workload seamlessly between different task contexts.

it does not allow sending the same set of samples to different task contexts. There is a hack allowing buffer connections to be read as if they were data connections, but it is a hack, given that the reader cannot know whether it is really reading a sample or reading a default value because the buffer is empty.

IMO the first case is actually rare in robotic control (and you can implement a generic workload-sharing component with nicer features, like keeping the ordering between input and output).

The second case is much more common. For instance, in my robot, I want a safety component that monitors a laser scanner (near-obstacle detection for the purpose of safety), and the same laser scans to go to a SLAM algorithm. I cannot do that for now, because I need a buffered connection to the SLAM algorithm. I cannot use the aforementioned hack either, because for now I plan to put a network connection between the scanner driver and the two targets, and therefore I cannot really guarantee which component will get what.

Proposal

What I'm proposing is getting back to a good'ol data flow model, namely:

making write ports "send and forget". If the port fails to write, then it is the problem of the reader ! I really don't see what the writer can do about it anyway, given that it does not know what the data will be used for (principle of component separation). The reader can still detect that its input buffer is full and that it did not get some samples and do something about it.

making write ports "connection-type-less", i.e. no WRITE data ports and WRITE buffer ports anymore, only write ports. This will allow connecting a write port to a read port with any kind of connection. Actually, I don't see a use case where the port designer can decide what kind of connection is best for its OUTPUT ports. Some examples:

in the laser scanner example above, the safety component would like a data port and the slam a buffer port

in position filtering, some components just want the latest positions and other components all the position stream (for interpolation purposes for instance)

in general, GUI vs. X. GUIs want most of the time the latest values.

... I'm sure I can come up with other examples if you want them

locating the sample on the read ports (i.e. no ConnectionInterface and subclasses anymore). The bad: one copy of each sample per read port. The good: you implement the point above (write ports do not have a connection type), and you fix buffer connections once and for all.

removing (or deprecating) read/write ports. They really have no place in a data flow model.

Simplified, more robust default activities

From RTT 1.8 on, an Orocos component is created with a default 'SequentialActivity', which uses ('piggy-backs on') the calling thread to execute its asynchronous functions. It has been argued that this is not a safe default, because a component with a faulty asynchronous function can terminate the thread of a calling component, in case the 'caller' emits an asynchronous event (this is quite technical, you need to be on orocos-dev for a while to understand this).

Furthermore, in case you do want to assign a thread, you need to select a 'PeriodicActivity' or 'NonPeriodicActivity', which have their quirks as well. For example, PeriodicActivity serialises activities of equal period and priority, and NonPeriodicActivity says what it isn't instead of what it is.

The idea is to create a new activity type that allocates one thread and can be periodic or non-periodic. The other activity types remain (and/or are renamed) for specialist users who know what they want.

Streamlined Execution Flow API

It started with an idea on FOSDEM. It went on as a long mail (click link for full text and discussion) on the Orocos-dev mailing list.

Here's the summary:

RTT interoperates badly with other software; for example, any external process needs to go through a convoluted CORBA layer. There are also no tools that could ease the job (except the ctaskbrowser), for example small shell commands that can query/change a component.

RTT has remaining usability issues. Sylvain already identified the shortcomings of data/buffer ports and proposed a solution. But any user wrestling with the 'Should I use an Event (syn/asyn), Method, Command or DataPort?' question only got the answer: 'Well, we've got Events (syn/asyn), Methods, Commands and DataPorts!'. It's not coherent. There are other frameworks doing a better job. We can do a far better job.

RTT has issues with its current distribution implementation: programs can be constructed such that they cause memory leaks at the remote side, Events never made it into the CORBA interface (there is a reason for that), and our data ports over CORBA are as weak as the C++ implementation.

And then there are also the untaken opportunities to reduce RTT & component code size drastically and remove complex features.

The pages below analyse and propose new solutions. The pages are in chronological order, so later pages represent more recent views.

First analysis

I've seen people using the RTT for inter-thread communication in two major ways: implementing a function either as a Method or as a Command, where the Command was the thread-safe way to change the state of a component. The adventurous used Events as well, but I can't say they're a huge success (we got like only one 'thank you' email in their whole existence...). But anyway, Commands are complex for newbies, and Events (syn/asyn) aren't better. So for all these people, here it comes: the RTT::Message object.

Remember, Methods allow a peer component to _call_ a function foo(args) of the component interface. Messages will have the meaning of _sending_ another component a message to execute a function foo(args). Contrary to Methods, Messages are 'send and forget': they return void. The only guarantee you get is that if the receiver was active, it processed the message.

For now, forget that Commands exist. We have two inter-component messaging primitives now: Messages and Methods. Each component declares: you can call these methods and send these messages. They are the 'Level 0' primitives of the RTT. Any transport should support them. Note that, conveniently, the transport layer may implement messages with the same primitive as data ports. But we, users, don't care. We still have Data Ports to 'broadcast' our data streams, and now we have Messages as well, to send directly to component X.

Think about it. The RTT would already be usable if each component only had data ports and a Message/Method interface. Ask the AUTOSAR people; it's very close to what they have (and can live with).

There's one side effect of the Message: we will need a real-time memory allocator to reserve a piece of memory for each message sent, and to free it when the message is processed. Welcome, TLSF. In case such a thing is not possible or not wanted by the user, Messages can fall back to using pre-allocated memory, but at the cost of reduced functionality (similar to what Commands can do today). Also, we'll have a MessageProcessor, which replaces and is a slimmed-down version of today's CommandProcessor.

So where does this leave Events? Events are among the last primitives I explain in courses, because they are so complex. They don't need to be. Today you need to attach a C/C++ function to an event and optionally specify an EventProcessor. Depending on some this-or-thats, the function is executed in this or the other thread. Let's forget about that. In essence, an Event is a local thing that others would like to know about: something happened 'here'; who wants to know? Events can be changed such that you can say: if event 'e' happens, then call this Method. And you can say: if event 'e' happens, send me this Message. You can subscribe as many callbacks as you want. Because of the lack of this mechanism, the current Event implementation has a huge footprint. There's a lot to win here.

Do you want to allow others to raise the event? Easy: add it to the Message or Method interface, saying: send me this Message and I'll raise the event, or call this Method and you'll raise it, respectively. But whether someone can raise it is your component's choice. That's what the event interface should look like. It's a Level 1. A transport should do no more than allow connecting Methods and Messages (which it already supports, Level 1) to Events. No more. Even our CORBA layer could do that.

The implementation of Event can benefit from an rt_malloc as well, indirectly. Each raised Event which causes Messages to be sent out will use the Message's rt_malloc to store the event data, by just sending the Message. In case you don't have/want an rt_malloc, you fall back to what events can roughly do today, but with a lot less code (goodbye RTT::ConnectionC, goodbye RTT::EventProcessor).

And now comes the climax: Sir Command. How does he fit in the picture? He'll remain in some form, but mainly as a 'Level 2' citizen. He'll be composed of Methods, Messages and Events and will be no more than a wrapper, keeping related classes together, or even not that. Replacing a Command with a Message hardly changes anything on the C++ side. For scripts, Commands were damn useful, but we will come up with something satisfactory. I'm sure.

How does all this interface shuffling allow us to get 'towards a sustainable distributed component model'? Because we're seriously lowering the requirements on the transport layer:

It only needs to implement the Level 0 primitives. How proxies and servers are built depends on the transport. You can do so manually (dlib like) or automatically (CORBA like)

It allows the transport to control memory better, share it between clients and clean it up at about any time.

The data flow changes Sylvain proposes strengthen our data flow model and I'm betting on it that it won't use CORBA as a transport. Who knows.

And we are at the same time lowering the learning curve for new users:

You can easily explain the basic primitives: Properties => XML, DataPorts => process data, Methods/Messages => client/server requests. When they're familiar with these, they can start playing with Events (which build on top of Methods/Messages and play a role in DataPorts as well). And finally, if they ever need it, the Convoluted Command can cover the most complex scenarios.

You can more easily connect with other middleware or external programs. People with other middleware will see the opportunities for 1-to-1 mappings or even implement it as a transport in the RTT.

(Please feel free to edit/comment etc. This is a community document, not a personal document)

Notes on naming

The word 'service' is used to name the offering of a C/C++ function for others to call. Today, Orocos components offer services in the form of RTT::Method or RTT::Command objects. Both lead to the execution of a function, but in a different way. Also, despite the title, it is advised to refrain from using the terms synchronous/asynchronous, because they are relative terms and may cause confusion if the context is not clear.

An alternative naming is possible: the offering of a C/C++ function could be named 'operation' and the collection of a given set of operations in an interface could be called a 'service'. This definition would line up better with service oriented architectures like OSGi.

Purpose

This page collects the ideas around the new primitives that will replace/enhance Method and/or Command. Although Method is a primitive clearly understood by users, Command isn't, because of its multi-threaded nature. It is too complex to set up and use, and can lead to unsafe applications (segfaults) if used incorrectly. To design these primitives better, we re-examine what users want to do and how to map that to RTT primitives.

What users want to do

Users want to control which thread executes which function, and whether to wait (block) on the result or not, all in order to meet deadlines in real-time systems. In practice, this boils down to:

When calling services (i.e. functions) of other components, one may opt to wait until the service returns the result, or not, and optionally collect the result later. This is best decided at the caller side, because the two cases require different client code for sending/receiving the results.

When implementing services in a component, the component may decide that the caller's thread executes the function, or that it will execute the function in its own thread. Clearly, this can only be decided at the receiver side, because the two cases require different implementations of the executed function, especially with respect to thread-safety.

Dissecting the cases

When putting the above in a table, you get:

Calling a service (a function)

Wait? \ Thread?    Caller      Component
Yes                (Method)    (?)
No                 X           (Command)

For reference, the current RTT 1.x primitives are shown. There are two remarkable spots: the X and the (?).

The X is a practically impossible situation. It would mean that the caller does not wait, yet its thread still executes the function. This could only be resolved if a 'third' thread executes the service on behalf of the caller. It is unclear at which priority this thread should execute, what its lifetime and exclusivity are, and so on.

The (?) marks a hole in the current RTT API. Users could only implement this behaviour by busy-waiting on the Command's done() function. However, that is disastrous in real-time systems, because of the starvation and priority inversion issues that crop up with such techniques.

Another thing you should be aware of is that in the current implementation, caller and component must agree on how the service is invoked. If the component defines a Method, the caller must execute it in its own thread and wait for the result; there is no way for the caller to deviate from this. In practice, this means that the component's interface dictates how the caller can use its services. This is consistent with how UML defines operations, but other frameworks, like ICE, allow any function in the interface to be called blocking or non-blocking. Clearly, ICE has some kind of thread pool behind the scenes that does the dispatching and collects the results on behalf of the caller.

Backwards compatibility - Or how it is now

Orocos users have written many components, and the primary idea of RTT 2.0 is to solve the issues these components still have due to defects in the current RTT 1.x design. Things that do work satisfactorily should keep working without modification of the user's design.

Method

It is very likely that the RTT::Method primitive will remain as it is today. Few problems have been reported, and it is easy to understand. The only disadvantage is that it cannot be called 'asynchronously'. For example: if a component defines a Method, but the caller does not have the resources to invoke it (due to a deadline), it needs to set up a separate thread to do the call on its behalf. This is error-prone. Orocos users often work around this by defining a Command and trying to get the result data back somehow (also error-prone).

Command

Commands serve multiple purposes in today's programming with Orocos.

First, they allow thread-safe execution of a piece of code in a component. Because the component thread executes the function, no locking or synchronization primitives are required.

Second, they allow a caller to dispatch work to another component, in case the caller does not have the time or resources to execute a function.

Third, they allow tracking the status of the execution. The caller can poll to see if the function has been queued or executed, what it returned (a boolean), etc.

Fourth, they allow tracking the status of the 'effect' of the command, past its execution. This is done by attaching a completion condition, which returns a bool indicating whether the effect of the command has completed or not. For example, if the command is to move to a position, the completion condition would return true once the position is reached, while the command function would only have programmed the interpolator to reach that position. Completion conditions are not used that much, and must be polled.

A simpler form of Command will be provided that does not contain the completion condition, which is too seldom used.

It is up to the proposals to show how to emulate the old behavior with the new primitives.

Proposals

Each proposal should try to solve these issues:

The ability to let caller and component choose which execution semantics they want when calling or offering a service (or motivate why a certain choice is limited):

The ability to wait for a service to be completed

The ability to invoke a service and not wait for the result

The ability to specify in the component implementation if a function is executed in the component's thread

The ability to specify in the component implementation if a function is executed in the caller's thread

And regarding easy use and backwards compatibility:

Show how old-time behavior can be emulated with the new proposal

Show which semantics changed

How these primitives will be used in the scripting languages and in C++

And finally:

Define proper names for each behavior.

Proposal 1: Method/Message

This is one of the earliest proposals. It proposes to keep Method as-is, remove Command and replace it with a new primitive: RTT::Message. The Message is a stripped-down Command. It has no completion condition and is send-and-forget: one cannot track the status or retrieve arguments. It also uses a memory manager to allow invoking the same Message object multiple times with different data.

Emulating a completion condition is done by defining the completion condition as a Method in the component interface and requiring that the sender of the Message checks that Method to evaluate progress. In scripting this becomes:

Users have indicated that they also wanted to be able to specify in C++:

message.wait("hello"); // send and block until executed.

It is not clear yet how the wait case can be implemented efficiently.

The user visible object names are:

RTT::Method to add a 'client thread' C/C++ function to the component interface or call one.

RTT::Message to add a 'component thread' C/C++ function to the component interface or call one.

This proposal solves:

A simpler replacement for Command

Acceptable emulation of old user code

The invocation of the same message object multiple times in a row.

This proposal omits:

The choice of caller/component to choose independently

Solving case 'X' (see above)

How message.wait() can be implemented

Other notes:

It has been mentioned that 'Message' is not a good name and is too confusing.

Proposal 2: Method/Service

This proposal focuses on separating the definition of a Service (component side) from the calling of a Method (caller side).

The idea is that components only define services, and assign properties to these services. The main property to toggle is 'executed in my thread, the caller's thread, or even another thread', but other properties could be added too. For example: a 'serialized' property which causes the locking of a (recursive!) mutex during the execution of the service. The user of the service cannot and need not know how these properties are set; he only sees a list of services in the interface.

It is the caller that chooses how to invoke a given service: waiting for the result ('call') or not ('send'). If he doesn't want to wait, he has the option to collect the results later ('collect'). The default is blocking ('call'). Note that this waiting or not is completely independent of how the service was defined by the component; the framework will choose a different 'execution' implementation depending on the combination of the properties of service and caller.

This means that this proposal covers all four quadrants of the table above. It does not yet detail how to implement case (X), though, which requires a third thread to do the actual execution of the service (neither component nor caller wishes to execute the C function).

This example shows two use cases for the same 'the_service' functionality. The first case emulates an RTT 1.x method: it is called and the caller waits until the function has been executed. You cannot see here which thread effectively executes the call. Maybe it's 'comp's thread, in which case the caller's thread blocks until the function is executed. Maybe it's the caller's thread, in which case it is effectively executing the function. The caller doesn't actually care. The only observable effects are that the call takes a certain amount of time to complete, *and* that if the call returns, the function has effectively been executed.

The second case emulates an RTT 1.x command. The send returns immediately and there is no way of knowing when the function has been executed. The only guarantee you have is that the request arrived at the other side and that, barring crashes and infinite loops, it will complete some time in the future.

A third example is shown below where another service is used with a 'send' which returns a result. The service takes two arguments: a string and a double. The double is the answer of the service, but is not yet available when the send is done. So the second argument is just ignored during the send. A handle 'h' is returned which identifies your send request. You can re-use this handle to collect the results. During collection, the first argument is now ignored, and the second argument is filled in with the result of the service. Collection may be blocking or not.

The definition of the service happens at the component's side. The component decides for each service whether it is executed in its own thread or the caller's thread:

// By default, creates a service executed by the caller,
// equivalent to defining an RTT 1.x Method:
RTT::Service the_service("the_service", &foo_service );

// Sets the service to be executed by the component's thread,
// equivalent to Command:
the_service.setExecutor(this);

// The above in one line:
RTT::Service the_service("the_service", &foo_service, this);

The user visible object names are:

RTT::Service to add a C/C++ function to the component interface (replaces use of Method/Command).

RTT::CallMethod or similar to call a service, please discuss a good/better name.

RTT::SendMethod or similar to send (and collect results from) a service, please discuss a good/better name.

This proposal solves:

Allows to specify threading parameters in the component independent of call/send semantics.

Removes user method/command dilemma.

Aligns better with 3rd party frameworks that also offer 'services'.

This proposal omits:

What the exact collection semantics are.

How to resolve a 'send' with a 'service executed in thread of caller' (case X). Should a send indicate which thread must do the execution on its behalf? Is the execution deferred to another point in time in the caller's thread?

Your Proposal here

...

Provides vs Requires interfaces

Users can express the 'provides' interface of an Orocos Component. However, there is no easy way to express which other components a component requires. The notable exception is data flow ports, which have in-ports (requires) and out-ports (provides). It is, however, not possible to express this requires interface for the execution flow interface, i.e. for methods, commands/messages and events. This omission makes the component specification incomplete.

One of the first questions raised is whether this must be expressed in C++ or during 'modelling'. That is, UML can express the requires dependency, so why should the C++ code also contain it in the form of code? It should only contain it if you cannot generate code from your UML model. Since code generation is not yet available for Orocos components, there is no choice but to express it in C++.

A requires interface specification should be optional and only be present for:

automatically connecting component 'execution' interfaces, so that the manual lookup code you need to write today can be omitted.

We apply this in code examples to various proposed primitives in the pages below.

New Command API

Commands are no longer part of the TaskContext API. They are helper classes which replicate the old RTT 1.0 behaviour. To set up commands more easily, they may be registered in a 'requires()' interface.

New Event API

The ideas behind the new Event API are:

1. Only the owner of the event can emit the event (unless the event is also added as a Method or Message).

2. Only method or message objects can subscribe to events.

New Message API

This use case shows how one can use messages in the new API. The unchanged Method is added for comparison. Note that I have also added the provides() and requires() mechanism so that the RTT 1.0 construction:

method = this->getPeer("PeerX")->getMethod<int(double)>("Method");

is no longer required. The connection is made similarly to how data flow ports are connected.

New Method, Operation, Service API

This page shows some use cases of the newly proposed service classes in RTT 2.0.

WARNING: This page assumes the reader has familiarity with the current RTT 1.x API.

First, we introduce the new classes that would be added to the RTT:

#include <rtt/TaskContext.hpp>
#include <string>

using RTT::TaskContext;
using std::string;

/**************************************
 * PART I: New Orocos Classes
 **************************************/

/**
 * An operation is a function a component offers to do.
 */
template<class T>
class Operation {};

/**
 * A Service collects a number of operations.
 */
class ServiceProvider {
public:
    ServiceProvider(string name, TaskContext* owner);
};

/**
 * Is the invocation of an Operation.
 * Methods can be executed blocking or non-blocking;
 * in the latter case the caller can retrieve the results later on.
 */
template<class T>
class Method {};

/**
 * A ServiceRequester collects a number of methods.
 */
class ServiceRequester {
public:
    ServiceRequester(string name, TaskContext* owner);
    bool ready();
};

What is important to notice here is the symmetry:

(Operation, ServiceProvider) <-> (Method, ServiceRequester).

The left hand side is offering services, the right hand side is using the services.

First we define that we provide a service. The user starts from his own C++ class with virtual functions. This class is then implemented in a component. A helper class ties the interface to the RTT framework:

Methods vs Operations

RTT 2.0 has unified events, commands and methods in the Operation interface.

Purpose

To allow one component to provide a function and other components, located anywhere, to call it. This is often called 'offering a service'. Orocos components can offer many functions to any number of components.

Component interface

In Orocos, a C or C++ function is managed by the 'RTT::Operation' object. So the first task is to create such an operation object for each function you want to provide.

The writer of the component has written a function 'getType()' which returns a string that other components may need. In order to add this operation to the Component's interface, you use the TaskContext's addOperation function. This is a short-hand notation for:

// Add the C++ method to the operation interface:
provides()->addOperation("getType", &MyTask::getType, this)
.doc("Read out the name of the system.");

Meaning that we add 'getType()' to the component's main interface (also called the 'this' interface). addOperation takes a number of parameters: the first one is always the name, the second a pointer to the function, and the third a pointer to the object on which that function must be called, in our case MyTask itself. In case the function is a free (C) function, the third parameter may be omitted.

If you don't want to pollute the component's this interface, put the operation in a sub-service:

// Add the C++ method objects to the operation interface:
provides("type_interface")->addOperation("getType", &MyTask::getType, this)
.doc("Read out the name of the system.");

The code above dynamically creates a new service object 'type_interface' to which one operation is added: 'getType()'. This is similar to creating an object-oriented interface with one function in it.

Calling an Operation in C++

Now another task wants to call this function. There are two ways to do this: from a script or in C++. This section explains how to do it in C++

Your code needs a few things before it can call a component's operation:

It needs to be a peer of instance 'ATask' of MyTask.

It needs to know the signature of the operation it wishes to call: string (void) (this is the function's declaration without the function's name).

It needs to know the name of the operation it wishes to call: "getType"

Combining these three givens, we must create an OperationCaller object that will manage our call to 'getType':

A lot of work for calling a function, no? The advantages you get are these:

ATask may be located on any computer, or in any process.

You didn't need to include the header of ATask, so it's very decoupled.

If ATask disappears, the OperationCaller object will let you know, instead of crashing your program.

The exposed operation is directly available from the scripting interface.

Calling Operations in scripts

In scripts, operations are accessed far more easily. The above C++ part is reduced to:

var string result = "";
set result = ATask.getType();

Tweaking Operation's Execution

In real-time applications, it is important to know which thread will execute which code. By default the caller's thread will execute the operation's function, but you can change this when adding the operation by specifying the ExecutionType:

// Add the C++ method to the operation interface:
// Execute the function in the component's thread:
provides("type_interface")->addOperation("getType", &MyTask::getType, this, OwnThread )
.doc("Read out the name of the system.");

So when getType() is called, it gets queued for execution in the ATask component, is executed by its ExecutionEngine, and when done, the caller resumes. The caller (i.e. the OperationCaller object) will not notice this change of execution path: it waits for the getType function to complete and returns the results.

Not blocking when calling operations

In the examples above, the caller always blocked until the operation returns the result. This is not mandatory. A caller can 'send' an operation execution to a component and collect the returned values later. This is done with the 'send' function:

Other variations on the use of SendHandle are possible, for example polling for the result or retrieving more than one result if the arguments are passed by reference. See the Component Builder's Manual for more details.

RTT 2.0 Data Flow Ports

RTT 2.0 has a more powerful, simple and flexible system to exchange data between components.

Renames

Every instance of ReadDataPort and ReadBufferPort must be renamed to 'InputPort' and every instance of WriteDataPort and WriteBufferPort must be renamed to OutputPort. 'DataPort' and 'BufferPort' must be renamed according to their function.

The rtt2-converter tool will do this renaming for you, or at least, make its best guess.

Usage

InputPort and OutputPort have a read() and a write() function respectively:

As you can see, Get() and Pull() map to read(), and Set() and Push() to write(). read() returns a FlowStatus object, which can be NoData, OldData or NewData. write() does not return a value (send-and-forget).

Writing to an unconnected port is not an error. Reading from an unconnected (or never-written-to) port returns NoData.

Your component can no longer see whether a connection is buffered or not; it doesn't need to know. It can always inspect the return value of read() to see whether a new data sample arrived. In case multiple data samples are ready to read in a buffer, read() fetches each sample in order, returning NewData each time, until the buffer is empty, after which it returns the last read sample with 'OldData'.

Whether data exchange is buffered or not is now determined by 'Connection Policies', i.e. 'RTT::ConnPolicy' objects. This allows you to be very flexible in how components are connected, since you only need to specify the policy at deployment time. It is possible to define a default policy for each input port, but it is not recommended to count on a certain default when building serious applications. See the 'RTT::ConnPolicy' API documentation for which policies are available and what the defaults are.

Deployment

The DeploymentComponent has been extended so that it can create new-style connections. You only need to add sections to your XML files; you don't need to change existing ones. The sections to add have the form:

<!-- You can set per data flow connection policies -->
<struct name="SensorValuesConnection" type="ConnPolicy">
  <!-- Type is 'shared data' or buffered: DATA: 0, BUFFER: 1 -->
  <simple name="type" type="short"><value>1</value></simple>
  <!-- buffer size is 12 -->
  <simple name="size" type="short"><value>12</value></simple>
</struct>
<!-- You can repeat this struct for each connection below ... -->

Where 'SensorValuesConnection' is a connection between data flow ports, like in the traditional 1.x way.

Real-time with Complex data

The data flow implementation tries to pass on your data in as hard real-time a fashion as possible. This requires that your data type's operator=() is hard real-time. In case your operator=() is only real-time if enough storage has been allocated beforehand, you can inform your output port of the amount of storage to pre-allocate. You can do this by using: