ROS provides an inter-process communication framework that is easy to use. However, it is built around TCP/IP and works best on a reliable network. For mobile robots, a reliable network often cannot be guaranteed, since they usually operate far from the operator and a high-speed wired connection is impossible in most cases. Lightweight Communications and Marshalling (LCM) was developed at MIT, originally for the DARPA autonomous vehicle competition. It uses the UDP protocol and is optimized for systems where high bandwidth and low latency are critical. Thus it is chosen as the default IPC framework.

This tech note provides a list of available C/C++ logging frameworks and a brief comparison among them. The final decision I made was to start with g3log. It’s open-source, lightweight, asynchronous and actively maintained.

Once you have collected a log, you may need a tool to help you analyze it, especially when you’re logging sensor data and system states of the robot for control purposes. Just looking at a full screen of numbers in a log file doesn’t give you much useful information.

An easy and quick way is to write a Matlab/Octave program. The basic steps include reading data from the file, performing extra calculations on the data, and plotting the data in 2D/3D. In some cases, say you’re trying to tune a controller with a simulator, you may also want your log analyzer to pick up the latest log automatically so that the simulate/log/analyze cycle runs seamlessly.

You can also write your own log analyzer in C/C++. Gnuplot is a lightweight tool for data plotting that can be driven from your program.

Essentially, a robot simulation calculates the system state using the robot’s mathematical model with given initial conditions and inputs. Generally we use ordinary differential equations (ODEs) to describe our robots. By solving the set of ODEs for the system states at discrete times (0, 1*ts, 2*ts, ...), we get the simulated behavior of the robot over the simulated period.

Similarly, a physics engine calculates the state of a system using some kind of numerical method (read about ODE/Bullet for more information). We can set a step size tphy_s for the physics engine so that each call advances the calculated system state to time tphy_s, 2*tphy_s, 3*tphy_s, ... consecutively. If we invoke the calculation at fphy Hz with a step size of tphy_s, the system state advances from time 0 to fphy * tphy_s within one second of wall time. For example, with fphy = 1 kHz and tphy_s = 1 ms, after the last calculation iteration we get the system state at t = 1000 * 1 ms = 1 s. Since one second of wall time elapses in the real world and the simulated system state also advances by one second, we say that the real-time factor is 1.

In an extreme case, if we call the physics engine very fast and use a big step size, then by the end of the 1 s simulation period we get the calculated system state at a time tsim_end > 1 s. This means time in the simulator elapses faster than time in the real world, so the real-time factor is greater than 1.

Of course, limited by the computational power of the computer, the update rate of the physics engine cannot be arbitrarily high. If a system is so complex that the physics engine cannot produce a solution within 1 s, then even if we set the step size to 1 s (which is quite large when simulating a process), we still cannot reach a real-time factor of 1 or greater.

As for the simulation step, it also involves sensing and rendering. The physics engine can run at a much higher rate, since we don’t necessarily have to present the result to the user at every step. Hence we may run the physics engine at 1 kHz in the background and refresh the simulation scene for the user at only 50 Hz.

V-REP (virtual robot experimentation platform) is dual-licensed software. It provides a free educational license for the pro-edu version, and source code under the GNU GPL license. Compared to Gazebo, V-REP is self-contained (fewer dependencies), cross-platform (Linux/Windows/Mac) and feature-rich (more robots/sensors usable out of the box). Moreover, it’s more user-friendly and requires less time to set up your customized robot.

Before starting to work with V-REP, I would suggest “playing with” it for a while. Try out different mobile/non-mobile robots and explore what’s available in the “Model Browser”. Watch this video. At this point don’t try to grasp every detail yet; just get familiar with the UI and learn how to change the position/orientation of a robot/component inside the simulation scene.

There are mainly two parts you need to take care of when setting up a simulator for your robot. The first part is the robot itself. You need to prepare the dynamic model (not the mathematical one, but a simplified geometric model for the physics engine) and the robot mechanical model (a more detailed model, compared to the dynamic model, used for visualization). If you want your robot to look better in the simulator, you may also want to prepare texture files and extra 3D components (for example, better-looking wheels for your mobile robot).

The second part is the simulated environment. In V-REP, a simulated environment is called a scene. A scene may include robots, sensors and other elements such as furniture and office items; each of these is called an object. V-REP makes heavy use of Lua scripts: almost every object in the simulation scene has one or more scripts attached, coordinated by the main script attached to the scene. The main script runs the “actuation”-“sensing”-“display” steps of each object repeatedly, one simulation step at a time, to keep the simulation running, and it also invokes the physics engine to advance in time. Refer to this article for more details. V-REP supports two types of scripts: threaded and non-threaded. In most cases a non-threaded script is used; this type follows the “actuation”-“sensing”-“display” steps controlled by the main script. If you want more control beyond these steps and need to do something in parallel with the main simulation process, you can do that in a threaded script. Read this article about child scripts.

After getting these two parts ready, the next thing to figure out is how to interface with the simulation scene, such as acquiring sensor data and sending motor commands. Once your robot is simulated in a scene and you can talk to it, you can play with it for whatever purpose, such as implementing your dynamic control algorithm or testing the mapping and localization functions of your robot, just as you would with the real robot.

At this point, you should have a rough idea of how V-REP works. For most of the components provided by V-REP, if they don’t behave as you desire, you probably just need to tweak the attached script. For example, if you want to change the behavior of the propeller module, just open its script, locate the actuation part and adjust the force/torque formula it uses.

If your robot is very simple (say it consists of only a few primitive shapes: cube, cylinder, sphere ...), you can start creating it right in V-REP, and you can use these shapes for both visualization and dynamics simulation. But if your robot is more complicated, you might want to import your mechanical models (.stl/.obj/.dxf files) into V-REP. In this case the visualization part should be straightforward, since your simulated robot is created from the design file of the real robot. However, for the physics engine these (usually non-convex) shapes can be computationally inefficient and potentially unstable, so it’s preferable to create a simplified geometric model for the dynamics calculation. Refer to this article for more details.

In V-REP, you can put the visualization models and dynamics models in different layers so that you can inspect only the part you’re interested in. You need to decide two properties for every part of your robot: static/non-static and respondable/non-respondable. The basic idea is that non-static (dynamic) parts are considered in the physics engine’s calculations, and respondable parts are considered in collision checking, sensor detection, etc. The article “Designing dynamic simulations” describes this in detail. In general, you should set your dynamic models to be non-static and respondable, while the more detailed visualization models should be static and non-respondable. In addition, you should set parameters such as mass and inertia properly on the dynamic models; this may require knowing the mathematical model of your robot.

Once your robot is set up, you can add more sensors to it, and depending on your application you can even set up different environments such as an office room or a desert area. Again, you may need to tweak the scripts of the objects you add to the scene to make them behave the way you want.

Now you’ve got a robot simulated and running in V-REP. What’s next? You probably want to read sensor data from the simulated robot and control it. That’s the main reason for setting up a simulator in the first place.

V-REP provides a few methods to do so. Basically there are two types of interfacing methods: internal communication and inter-process communication. Refer to this tutorial to learn the advantages and disadvantages of each method. If you want to use similar code to control both the simulated robot and the real robot, the remote API may be the best choice. The workflow is that you start a server when you start the simulation, then call the remote API functions in your normal control code to read sensor data and send motor commands. Your code acts as a client with respect to the server. Watch this nice video for an example and read this article to learn more about the remote API. At this point, you should have a much better understanding of these articles: article1, article2.

Another point worth mentioning is that there are different ways for sensors and actuators to communicate, as discussed in “Means of communication in and around V-REP”. The regular API supports all of them, but the remote API does not. If a sensor uses a tube to publish its data and you want to get that data via the remote API, one easy solution is to modify the script of that sensor to send the data out via signals, which both the regular API and the remote API support.

In this case study, you will see how to create a simulation of the AscTec Hummingbird quadrotor.

Components for the simulation:

Quadrotor main frame

4 propellers

IMU sensor (gyro + accelerometer)

It’s highly recommended to read this official tutorial and follow the steps yourself before you start creating a simulation of your own robot. This case study assumes you’ve already read the contents of that tutorial and the previous sections of this wiki. The goal is only to show you the workflow of using V-REP with a more specific example, so a lot of operational details are omitted. If you run into problems along the way, always remember to check the official documentation and the official forum; in most cases you will find the answer there. It’s also very common to check the implementation of an existing robot model to get an idea of how to implement a specific feature on your own robot. The example robot models and scenes provided with V-REP are very good resources for learning it. In addition, you can post your question on the forum; in my experience, they respond to questions very quickly.

You need to prepare a mechanical model file (.stl, .obj) to create the body of your robot. If you just want to experiment with a simulated robot and don’t really want to design the mechanical structure yourself, this website is a good place to look for existing 3D models. People share mechanical designs such as RC cars and quadrotors on this site, and you can use them to create the simulation. Also get familiar with the robot library of V-REP; it’s very helpful to know what components are already available so that you can reuse them conveniently. Actuators and sensors are the two most commonly used types of components from V-REP for building your customized robot.

V-REP provides a very powerful tool for creating and modifying mechanical models. It’s recommended to assemble all the parts together in SolidWorks (or other software you prefer) first and export the robot model as a single STL file. You can import this STL into V-REP and divide the model into smaller parts later. By comparison, it can be very tedious and difficult to import all the parts (a set of .stl files) into V-REP directly and then try to assemble them into the right places, especially if your robot consists of many parts.

Download the stl file for the hummingbird quadrotor from the AscTec wiki. Import the model into V-REP.

If you want the simulated quadrotor to look better, you can now divide the model into smaller parts and apply textures to different parts.

The imported STL model is usually non-convex and not suitable for dynamic simulation in terms of performance and stability. It’s good practice to create a simpler dynamic model from the original STL model; this simplified model is the one actually used by the physics engine for its calculations. In other words, the STL model mainly helps you create the dynamic model: for example, you can extract a pure shape with similar geometric characteristics from the STL model to get the dynamic model of a specific part. After this process, the STL model serves mainly for visual effects.

In this example, the quadrotor frame is abstracted as a sphere with four cuboid shapes.

Don’t forget to adjust the properties of the dynamic model, such as mass and inertia.

V-REP provides a propeller model with particle simulation, so you can use it directly on the quadrotor. You can find the model under /components/locomotion in the model browser. Add four of these propellers to the scene and adjust the position of each.

If you want the simulated Hummingbird to be more like the real one in appearance, you can hide the visible parts of the provided propeller model and use a better-looking propeller model. For example, you can download this propeller model from GrabCAD site.

V-REP also provides models for the sensors. Simply drag a gyro and an accelerometer into the scene and place them at a proper position on the quadrotor. The final quadrotor model is shown in the image below. You can also check the hierarchy structure of the simulated robot in this screenshot.

By now you should have a working simulated Hummingbird quadrotor in V-REP. You need to write code to communicate with it (read sensor data from it and send motor commands to it). Here is an example using the remote API.

On the V-REP side, attach a threaded child script to the quadrotor model with the following code:

simSetThreadSwitchTiming(2)

-- Choose a port that is probably not used (try to always use a similar code):
simSetThreadAutomaticSwitch(false)
local portNb = simGetIntegerParameter(sim_intparam_server_port_next)
local portStart = simGetIntegerParameter(sim_intparam_server_port_start)
local portRange = simGetIntegerParameter(sim_intparam_server_port_range)
local newPortNb = portNb + 1
if (newPortNb >= portStart + portRange) then
    newPortNb = portStart
end
simSetIntegerParameter(sim_intparam_server_port_next, newPortNb)
simSetThreadAutomaticSwitch(true)

-- Check what OS we are using:
platf = simGetIntegerParameter(sim_intparam_platform)
if (platf == 0) then pluginFile = 'v_repExtRemoteApi.dll' end
if (platf == 1) then pluginFile = 'libv_repExtRemoteApi.dylib' end
if (platf == 2) then pluginFile = 'libv_repExtRemoteApi.so' end

-- Check if the required remote Api plugin is there:
moduleName = 0
moduleVersion = 0
index = 0
pluginNotFound = true
while moduleName do
    moduleName, moduleVersion = simGetModuleName(index)
    if (moduleName == 'RemoteApi') then
        pluginNotFound = false
    end
    index = index + 1
end

if (pluginNotFound) then
    -- Plugin was not found
    simDisplayDialog('Error', "Remote Api plugin was not found. ('" .. pluginFile .. "')&&nSimulation will not run properly", sim_dlgstyle_ok, true, nil, {0.8,0,0,0,0,0}, {0.5,0,0,1,1,1})
else
    -- Ok, we found the plugin.
    -- We first start the remote Api server service (this requires the v_repExtRemoteApi plugin):
    simExtRemoteApiStart(portNb) -- this server function will automatically close again at simulation end

    -- Add a banner:
    black = {0,0,0,0,0,0,0,0,0,0,0,0}
    purple = {0,0,0,0,0,0,0,0,0,1,0,1}
    blue = {0,0,0,0,0,0,0,0,0,0.6,0.6,1}
    --simAddBanner("Remote API @ Port: " .. portNb, 0, sim_banner_bitmapfont + sim_banner_overlay, nil, simGetObjectAssociatedWithScript(sim_handle_self), black, blue)

    -- Now we start the client application:
    result = simLaunchExecutable('quad_sim_river', portNb, 0)
    if (result == -1) then
        -- The executable could not be launched!
        simDisplayDialog('Error', "'quad_sim_river' could not be launched. &&nSimulation will not run properly", sim_dlgstyle_ok, true, nil, {0.8,0,0,0,0,0}, {0.5,0,0,1,1,1})
    end
end
-- This thread ends here. The robot will however still be controlled by
-- the client application via the remote Api mechanism!

The above script starts a server when you start the simulation; you can then use remote API functions in your C/C++ program to communicate with the simulation. The code below only shows the general structure needed to establish a connection between a client and a server.
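A possible client-side sketch using V-REP’s legacy remote API is shown below. It must be linked against the remote API library shipped with V-REP (extApi.c and friends), so it won’t build standalone; the object name "Quadricopter_base" and the control loop body are assumptions for illustration. The port number is the one passed by simLaunchExecutable in the script above.

```cpp
// Sketch of a remote API client: connect, look up an object handle,
// then loop reading state and sending commands. Requires V-REP's
// remote API library; object names here are assumptions.
extern "C" {
#include "extApi.h"
}
#include <cstdio>
#include <cstdlib>

int main(int argc, char* argv[]) {
    // The server script passes the port as the first argument.
    int portNb = (argc > 1) ? std::atoi(argv[1]) : 19999;
    int clientID = simxStart("127.0.0.1", portNb, true, true, 2000, 5);
    if (clientID == -1) {
        std::puts("failed to connect to the remote API server");
        return 1;
    }
    int baseHandle;
    simxGetObjectHandle(clientID, "Quadricopter_base", &baseHandle,
                        simx_opmode_oneshot_wait);
    while (simxGetConnectionId(clientID) != -1) {   // while still connected
        simxFloat pos[3];
        simxGetObjectPosition(clientID, baseHandle, -1, pos,
                              simx_opmode_oneshot_wait);
        // ... read sensor signals, compute control, send motor commands ...
        extApi_sleepMs(5);
    }
    simxFinish(clientID);   // close the connection
    return 0;
}
```

For streaming sensor data at high rates, the non-blocking operation modes (simx_opmode_streaming / simx_opmode_buffer) are usually preferred over oneshot_wait.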

Refer to the tutorials and example scenes to learn how to use the remote API, as well as the ROS and Matlab interfaces. The “controlTypeExamples” scene provided by V-REP is extremely useful for learning how each of the interfaces works, both on the V-REP side and in your own code. You can find more information on this page.

A demonstration video can be found here. In this video, direct motor commands are sent to the quadrotor, and the quadrotor lifts off right after receiving them.