Monday, August 14, 2017

It's been almost three months since I last posted about Clerkbot. This is largely because I moved to Bangalore, and my new job as a Data Scientist is really hectic. I still had some videos left to post, hence this entry. I had also been facing a lot of issues setting up a fully functional, WiFi-operable Clerkbot.

One of the reasons was the power setup for the whole system. A single 12 V 2200 mAh LiPo battery was running a WiFi router, two motors drawing almost 1 A each, and the NVIDIA Jetson TK1 board, which in turn ran the RPLIDAR A2. One big mistake I made was not balancing the LiPo battery: unbalanced cells develop a potential difference among themselves, and the pack then cannot supply sufficient current for the whole setup. I believe this is the best explanation for the weak power output I was getting, though I could be wrong. I used an externally powered USB 2.0 extension adapter (via a power bank) on the Jetson to power the RPLIDAR, to no avail; it caused a weird delay in the messages sent from the base Ubuntu ROS station to the bot. I then tried a USB 3.0 adapter powered from the Jetson TK1 board. The delay went away, but the board's 500 mA is not enough for a USB 3.0 adapter, so the bot stopped responding after about a minute. Here's a look at the setup:

Wednesday, July 12, 2017

During May I had been using a wired setup for the Clerkbot, and it was becoming increasingly difficult to map the classrooms at Nirma. The ultimate goal of the setup is a fully autonomous robot. When setting up the robot I had assumed the NVIDIA board was powerful enough for the ROS stack, but it is not; it would have been better to get our hands on the Jetson TX1, which is more powerful and is also used by the MIT autonomous racing team.

The Jetson TK1 is only just powerful enough to run a ROS-based setup, and only by degrading the quality of the autonomous stack: I had to set the planner frequency and the costmap resolution to the lowest workable values. If anyone is trying to build a ROS-based robot, I would not recommend the Jetson TK1. Or it may simply be that, being an amateur C++ programmer, I wrote computationally expensive nodes for the odometry publisher and the tf publisher, which makes the system very unstable. Nonetheless, I did manage to get a distributed setup running on the Clerkbot. In this distributed system, the onboard computer runs the navigation essentials: the GMapping node for map creation, the odometry publisher, and the RPLIDAR node. The base computer contains the brains of the setup: the navigation planner and localization, whose output is forwarded to the Jetson TK1 over WiFi, so the overall system is 'distributed' between the Jetson TK1 and the base computer. I put in a WiFi router to provide the wireless link between the two.
Here's a YouTube video of the setup:

Sunday, April 16, 2017

Autonomous navigation relies heavily on probabilities. The hard part isn't building ROS-based robots; it is actually understanding the underlying parameters and algorithms. One book that is practically a bible for these algorithms is Probabilistic Robotics by S. Thrun, W. Burgard, and D. Fox.

My first thought was to directly publish the odometry of Clerkbot using ready-made ROS nodes, but not knowing the underlying models and states isn't good for anyone interested in robotics. Here is my take on the velocity-based model from the book above, which is also the model on which base_local_planner works. This is because navigation planning inherently uses velocity commands to plan obstacle avoidance; odometer readings are only helpful after the control command has been issued.

There are two models in probabilistic kinematics:

Odometric Model

Velocity Model

I would like to emphasise one thing: in probabilistic robotics, as the name suggests, nothing is certain. You are constantly estimating states from given inputs, and because noise is present in almost every state, the uncertainty is expressed with probability distributions. The odometric model can be realised with just one probability distribution,

that is, you estimate the pose of the robot at the current time t, given the control command issued at time t and the pose at time t-1.
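In the notation of Probabilistic Robotics, this single distribution is written as:

```latex
p(x_t \mid u_t,\, x_{t-1})
```

where \(x_t\) is the robot's pose at time \(t\), \(u_t\) is the control command, and \(x_{t-1}\) is the previous pose.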

You can cross-verify here what I said in the previous paragraph: given a pose of the robot and a control command, you cannot be certain of the robot's position at the next time step because of noise in the motors, the odometers, or whatever actuator you choose. You cannot be certain about your position, but you can devise a distribution over the robot's position, given that you have the noise information.

Note: The odometry model is more accurate, since you get revolution counts from the encoders. But for motion planning and obstacle avoidance, the velocity model wins the race.

The velocity model can be broken down into the following points:

The translational and rotational velocities commanded at an instant are v and ω (read as 'vee' and omega). The radius r of the circular arc traced is then v/ω.

Given the initial pose (x, y, θ), the final pose can be estimated using the equations below, assuming an error-free system (Figures 2 and 3).

The final equations, given the initial pose and assuming constant angular and translational velocities over a time interval Δt (Figure 4).

Fig. 2: Rotational and translational motion

Fig. 3: Centre-of-circle equations

Fig. 4: The final pose estimate (error-free system)
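For reference, the equations shown in the figures, as derived in Probabilistic Robotics, are the following. The centre of the circular arc is:

```latex
x_c = x - \frac{v}{\omega}\sin\theta, \qquad
y_c = y + \frac{v}{\omega}\cos\theta
```

and the error-free final pose after time \(\Delta t\) is:

```latex
\begin{pmatrix} x' \\ y' \\ \theta' \end{pmatrix}
=
\begin{pmatrix} x \\ y \\ \theta \end{pmatrix}
+
\begin{pmatrix}
-\frac{v}{\omega}\sin\theta + \frac{v}{\omega}\sin(\theta + \omega\,\Delta t) \\[2pt]
\frac{v}{\omega}\cos\theta - \frac{v}{\omega}\cos(\theta + \omega\,\Delta t) \\[2pt]
\omega\,\Delta t
\end{pmatrix}
```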

Now this is where the concept of probabilistic kinematics and noise kicks in. Suppose the rotational and translational velocities are perturbed by noise with zero mean and variance b. The velocities the robot actually executes therefore contain this real-world noise.
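Following the book's velocity motion model, the executed velocities are the commanded ones plus zero-mean noise whose variance grows with the commanded speeds:

```latex
\hat{v} = v + \varepsilon_{\alpha_1 v^2 + \alpha_2 \omega^2}, \qquad
\hat{\omega} = \omega + \varepsilon_{\alpha_3 v^2 + \alpha_4 \omega^2}
```

Here \(\varepsilon_b\) denotes a zero-mean random variable with variance \(b\), and the \(\alpha_i\) are robot-specific error parameters.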

Sunday, April 9, 2017

Odometry is an important aspect of autonomous navigation, though not an absolutely essential one: the Hector SLAM algorithm, for instance, does not require odometry. This bot, however, does use it.

Odometry is important because you need to tell the ROS graph where you are with respect to the environment and how far you have moved from the starting point. As a bonus, you also get your heading relative to the initial pose.

Also, the map's origin is one frame, the odometry's origin is another, and the sensor (lidar) origin and the robot's geometric axis of rotation are different again. So you are continuously sending the state of your robot's position, and it is transformed from the local axes to the map's global axes. The same goes for the other transformations.

There are ready-made nodes that take care of your encoder counts, but I haven't used any of them, primarily because I wanted to know the intricacies of how ROS nodes work. There is a ROS tutorial on odometry too, but it is rather subjective and not to the point. Being a non-holonomic robot, this one didn't need much work on the mechanical side.
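To give an idea of what such a node computes, here is a minimal sketch of the differential-drive integration that a custom odometry publisher performs on every update. The geometry constants (encoder resolution, wheel radius, wheel base) are illustrative values, not Clerkbot's actual ones.

```cpp
#include <cmath>

// Pose of the robot in the odometry frame.
struct Pose { double x, y, theta; };

constexpr double kPi          = 3.14159265358979323846;
constexpr double TICKS_PER_REV = 360.0;   // encoder resolution (assumed)
constexpr double WHEEL_RADIUS  = 0.035;   // wheel radius in metres (assumed)
constexpr double WHEEL_BASE    = 0.20;    // distance between wheels in metres (assumed)

// Advance the pose given the encoder tick deltas of the left and right wheels
// since the last update.
Pose update_odometry(Pose p, long dticks_left, long dticks_right) {
    const double m_per_tick = 2.0 * kPi * WHEEL_RADIUS / TICKS_PER_REV;
    double d_left   = dticks_left  * m_per_tick;   // distance rolled by each wheel
    double d_right  = dticks_right * m_per_tick;
    double d_center = (d_left + d_right) / 2.0;    // distance moved by robot centre
    double d_theta  = (d_right - d_left) / WHEEL_BASE;
    // Integrate along the arc, using the midpoint heading for better accuracy.
    p.x     += d_center * std::cos(p.theta + d_theta / 2.0);
    p.y     += d_center * std::sin(p.theta + d_theta / 2.0);
    p.theta += d_theta;
    return p;
}
```

A real node then wraps this pose into a nav_msgs/Odometry message and broadcasts the odom-to-base_link transform over tf.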

Here are the odometry tests, run in accordance with the sanity tests described here.

Friday, April 7, 2017

There are some intricacies involved when you look at embedded systems at the assembly level. People who haven't taken a formal embedded course, or any formal C course in general, often end up coding the less efficient way. That is a generalisation, and I too, at some point while coding controllers, have done it the wrong way. The thing is, developing the logic is only one part of the story; making the most of the data types, qualifiers, and specifiers is also important. This is primarily because in embedded systems we generally have constraints on flash memory, RAM, or the available pins. The better you code, the better the optimisation and the better the controller will perform.

1) Use of static and const

My interrupt handler for counting encoder states uses a static counter variable; static and const are what I want to focus on here. Some key points:
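A minimal sketch of such a handler follows; the names are illustrative, not the exact STM32 firmware:

```cpp
#include <stdint.h>

// static: the counter keeps its value between invocations and is visible
// only within this file. volatile: it is modified from interrupt context,
// so the compiler must not cache it in a register.
static volatile uint32_t count = 0;

// Called by hardware on every encoder edge.
void encoder_isr(void) {
    count++;
}

// Accessor for main-loop code that consumes the tick count.
uint32_t encoder_count(void) {
    return count;
}
```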

What's actually happening here is that the variable count needs to accumulate the counts, leave the function, return to main, and still be keeping track of the counts when the handler fires again.

static is used here because I essentially want to keep track of the counts; with a plain local variable, the value would be destroyed the moment execution leaves the function.

The static specifier helps here because it tells the compiler to allocate fixed storage for the variable for the lifetime of the program. The value is therefore retained after the function's scope ends, and the compiler does not create a fresh copy of the variable on every call.

const is a qualifier; it tells the compiler that no changes may be made to the variable.

A global static variable is restricted to the scope of that file only, meaning no other file can share the same variable.

2) Use of volatile

This is another grey area in embedded programming. Not many know of it, and fewer use it, at least among amateur embedded programmers. volatile is again a qualifier, not a storage specifier like static. The compiler keeps making optimisations so that the code runs faster and more efficiently: while converting the high-level language to machine language, it may cache a variable in a register or drop reads and writes it considers redundant.

By putting a volatile qualifier on a variable, you are telling the compiler not to apply those optimisations to it, because its value can change outside the normal program flow, for example inside an interrupt handler or through a hardware register.
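Here is an illustrative sketch (the names are made up): a flag shared between an ISR and the main loop. Without volatile, the compiler could cache data_ready in a register, and a polling loop in main would never observe the ISR's write.

```cpp
#include <stdint.h>

// Shared between interrupt context and the main loop; volatile forces the
// compiler to re-read it from memory on every access.
static volatile uint8_t data_ready = 0;

// Hypothetical receive-complete interrupt handler.
void uart_rx_isr(void) {
    data_ready = 1;
}

// Main-loop code: each call performs a genuine memory read of data_ready.
int poll_data_ready(void) {
    return data_ready != 0;
}
```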

3) Use of pointers

Again, a topic largely neglected by intermediate and amateur embedded programmers. Basic logic is enough to get the code running, but chasing optimisations and increasing controller efficiency is also needed. I'm not going to go into the details, but pointers hold addresses directly. This is important because most of the time you are writing some driver and need to address a memory location directly.

There is also the fact that passing a pointer does not make the program copy the variable at run time.
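A sketch of driver-style register access through a pointer follows. The register name and layout are made up for illustration; a real STM32 driver would use the addresses from the reference manual or the vendor headers, and here a plain variable stands in for the hardware register so the code can run anywhere.

```cpp
#include <stdint.h>

// Stand-in for a memory-mapped output register. On real hardware this would
// instead be a fixed address from the datasheet, e.g.
//   #define GPIO_ODR ((volatile uint32_t *)0x40020014)
static uint32_t fake_gpio_odr = 0;

// Pointer to the "register": the address is const, the pointed-to value is
// volatile because hardware may change it.
volatile uint32_t *const GPIO_ODR = &fake_gpio_odr;

// Drive a pin high or low by writing through the pointer directly.
void gpio_set_pin(unsigned pin)   { *GPIO_ODR |=  (1u << pin); }
void gpio_clear_pin(unsigned pin) { *GPIO_ODR &= ~(1u << pin); }
```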

4) Use of debuggers

When you are working on large projects, you can't rely on trial and error every time to find the bugs in the code. You need a good debugger so you can see what happens to your code line by line.

It also means that if you have peripherals attached to your controller, you can actually check whether the incoming data is correct. I use a gdb server setup to load and debug via the ST-LINK on the STM32.

Sunday, March 5, 2017

It finally took me three months to fully put this robot together, and, fun fact, a whole month of that went into just tuning the ocean of parameters. That is two months to build the robot itself, on top of a good three months spent 'learning' the ROS framework, which included Gazebo simulations of a UAV and a UGV. Now that it is done, we are all geared up for the challenge of autonomous UAV navigation.

This whole robot setup is part of a year-long research project on UAV and UGV platforms under Dr. D.K. Kothari, HOD, EC Department, Nirma University. The UGV setup is planned as a precursor to the UAV setup, since we wanted to get 'hands-on' with the ROS framework: getting our hands directly on a UAV can be a daunting task, and the UGV was commissioned to fill exactly that need. The UGV is working perfectly. But if you want to know why exactly it is named so, you'll have to wait.

This is going to be a tutorial-cum-documentation series for the Clerkbot. In the coming videos and posts I will cover all the details of the robot. I would also like to soon establish an open-source platform for the same setup, so that others can benefit from it.