I’m currently working on an assignment for which I want to have a Turtlebot3 Waffle Pi + OpenManipulator-X perform AGV (autonomous navigation) and pick-and-place tasks. The idea is similar to what you see in this video from Robotis: https://www.youtube.com/watch?v=P82pZsqpBg0

The goal is to let the Turtlebot3 move to position X, grab an object (using AR tags for accuracy and consistency), move to position Y, drop the object (again using AR tags) and then move to position Z. The triggers for performing each task will be a signal coming from a server. I want to program these triggers with Python in a state machine.

From the research that I’ve done so far, SMACH seems like a good package to use (in combination with the other relevant and required packages). The reason is that SMACH lets you write states that use ROS packages, and it’s written in Python.
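To make the SMACH idea concrete: each state implements an `execute()` method that returns an outcome string, and the machine follows outcome-to-state transitions. The sketch below mimics that pattern in plain Python so the structure is visible without a ROS install (real SMACH uses `smach.State` and `smach.StateMachine`; all class names, outcome strings, and the `read_server_value` hook here are made up for illustration):

```python
# Pure-Python mimic of the SMACH pattern: each state's execute()
# returns an outcome string, and the machine follows transitions.
# (Names are hypothetical; real SMACH uses smach.State/StateMachine.)

class State:
    def execute(self, userdata):
        raise NotImplementedError

class WaitForTrigger(State):
    """Waits for a trigger value from the server."""
    def __init__(self, read_server_value):
        self.read_server_value = read_server_value
    def execute(self, userdata):
        # On the real robot this would poll the server until the
        # expected value appears; here we read it once.
        return 'triggered' if self.read_server_value() == 'go' else 'idle'

class MoveToTag(State):
    """Stand-in for navigating to an AR tag."""
    def __init__(self, tag_id):
        self.tag_id = tag_id
    def execute(self, userdata):
        userdata['last_tag'] = self.tag_id  # stand-in for navigation
        return 'arrived'

def run_machine(states, transitions, start, terminal):
    """Follow (state, outcome) -> next-state transitions until a terminal label."""
    userdata, label = {}, start
    while label not in terminal:
        outcome = states[label].execute(userdata)
        label = transitions[(label, outcome)]
    return label, userdata
```

Wiring up states and transitions then looks much like a SMACH `StateMachine.add(...)` block, with a self-loop on `('WAIT', 'idle')` to keep waiting until the trigger arrives.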

Does anybody know of a step-by-step tutorial to achieve the result from the Robotis video? There once was a tutorial for Pick and Place on the e-Manual website, but it has been removed. Since this video is so similar to what I want to achieve, a step-by-step tutorial would be a great basis for my project, on which I can expand with the required custom code.

While updating our source code for TurtleBot3 and OpenMANIPULATOR-X, we made some modifications to our contents.
The main reason we abandoned SMACH is that it has not been maintained for a few years and there were bugs.
However, we recently created the same feature under the Manipulation section, where the camera searches for the AR markers and delivers items to designated locations.
Please see TurtleBot3 Home Service Challenge for more details.
You can also try the simulation example.
Please note that the example is written based on ROS 1 Kinetic.
Thank you.

Oh, this challenge is new. I had stopped using the Waffle and the manipulator, as all the files moved and I got a bit lost, I guess. I am trying to write my own keyboard controller, since it is on my to-do list, but sadly I have two of these arms and the files conflict between the one on the Waffle and the one on my desktop.
I am sure they will fix it when they see how much it is needed.

Yes, this challenge was designed for a new competition that was going to be held in early 2020, but Covid-19 ruined our event.
The good news is that we still plan to hold this event this year.
As of today, the OpenMANIPULATOR-X and TurtleBot3_manipulation libraries are not quite compatible, but we keep working on making our code more coherent.
Thanks for your interest!

Alright, I’ve performed all of the steps of the home service challenge tutorial, which means that I can now run the demo remote launch file with my own map (created using the SLAM tutorial steps). Now, there are multiple things that I would like to change, but after digging and searching through the code, I’m a bit lost on how to achieve what I want.

With the stock home service challenge, the Turtlebot will search for the first AR tag (using coordinates and the camera, I guess?), navigate to this object, grab it and move it from this location to another one (which is also found with coordinates and the camera, I guess?), release the object and then move back to the starting position. These steps are then repeated for the other three objects.

What I want to achieve is the following:

Have the Turtlebot wait at the starting position

If my Python script reads a certain value from a server, the Turtlebot should move to AR tag 0 (where an object is located), fine-tune its position (I believe this is already implemented in the home service challenge) and give some feedback that the Python script can read/receive

Once the position has been fine-tuned, the Turtlebot should wait at the location and give some sort of feedback, so that my Python script can update a value on the server

When, during the waiting, my Python script reads another certain value from the server, the Turtlebot should actually grab the object and move to AR tag 1 (where the object has to be released). Again, as soon as the Turtlebot starts moving to the next position, some feedback should be given so that the Python script can update a value on the server.

Once the Turtlebot has arrived at AR tag 1, it should position itself in front of it (just like with AR tag 0) and release the object. After releasing the object, the Turtlebot should move to the starting position. Here again, as soon as the Turtlebot starts moving to the starting position, I’d like some feedback that the Python script can read/use.

Once arrived at the starting position, I want the Turtlebot to wait again for a signal from my Python script to repeat the steps above. Furthermore, once the Turtlebot arrives at the starting position, I want again a signal that the Python script can read.
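The whole cycle above can be sketched as a single polling loop. All trigger values (`'start_pick'`, `'start_place'`), feedback strings, and action names below are made-up placeholders: `read_server`/`write_server` stand in for whatever server API is used, and the `do[...]` callables stand in for the actual ROS navigation and manipulation calls:

```python
def run_cycle(read_server, write_server, do):
    """One pick-and-place cycle driven by server values.

    read_server()        -> current trigger value from the server
    write_server(value)  -> publish feedback the server can read
    do[name]()           -> robot action (navigate, grab, release, ...)
    """
    while read_server() != 'start_pick':      # wait at the starting position
        pass                                  # (real code would sleep here)
    do['move_to_tag_0']()                     # navigate + fine-tune at tag 0
    write_server('at_tag_0')                  # feedback: waiting at the object
    while read_server() != 'start_place':     # wait for the second trigger
        pass
    do['grab']()
    write_server('moving_to_tag_1')           # feedback: delivery started
    do['move_to_tag_1']()                     # navigate + fine-tune at tag 1
    do['release']()
    write_server('returning_home')            # feedback: heading back
    do['move_home']()
    write_server('at_home')                   # feedback: ready for next cycle
```

An outer loop around `run_cycle` then gives the repeat-on-signal behavior described above.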

Note that with my assignment, I’m only interested in moving one object from point A (AR tag 0) to point B (AR tag 1). With this plan in mind, there are several things I need to do. However, as I mentioned before, I’m a bit lost on how to solve it. So here’s what I need to change in the home service challenge and the issues that I face:

All actions should be separated, instead of one mission that performs all the tasks consecutively.
I think I can do this by simply splitting the scenario.yaml file into separate scenario.yaml files, but I’m not sure if this is correct and, if so, how I can then activate the separate steps (as listed above). My Python script, which reads the server values, will need to publish the ROS commands, but they will have to be separate commands for each step.
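Assuming the scenario player can be pointed at one scenario file per step, the automation script then only needs a mapping from server values to per-step commands. Everything below is an assumption about that interface: the scenario names are invented, and `publish` stands in for a `rospy.Publisher.publish` call on whatever topic the (split) scenario player listens to:

```python
# Hypothetical mapping from server values to per-step scenario commands.
# The names assume scenario.yaml was split into one file per step.
STEP_FOR_VALUE = {
    'pick':    'scenario_move_to_tag_0',
    'place':   'scenario_deliver_to_tag_1',
    'go_home': 'scenario_return_home',
}

def dispatch(server_value, publish):
    """Publish the command for one step; ignore unknown server values."""
    step = STEP_FOR_VALUE.get(server_value)
    if step is not None:
        publish(step)
    return step
```

The main loop of the automation script would then just poll the server and call `dispatch` with each new value.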

I need to update the position of the AR tags in the room.yaml file, but the coordinate system doesn’t make any sense to me, so I have no clue how to adapt the values to my own map.
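Those poses are most likely expressed in the map frame, i.e. the same frame your SLAM map (and your move_base goals) use. To find usable values for your own map, you can drive the robot to the spot and echo `/amcl_pose`, or use the RViz “Publish Point” tool and `rostopic echo /clicked_point`. Orientations are stored as quaternions, but on a flat floor only the yaw (rotation about z) matters, so a pose entry can be built from x, y and a yaw angle. A small sketch, assuming a `geometry_msgs/Pose`-like layout (the exact keys in room.yaml may differ):

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` radians about z."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def map_pose(x, y, yaw):
    """Dict mirroring a geometry_msgs/Pose, ready to dump into yaml."""
    qx, qy, qz, qw = yaw_to_quaternion(yaw)
    return {'position': {'x': x, 'y': y, 'z': 0.0},
            'orientation': {'x': qx, 'y': qy, 'z': qz, 'w': qw}}
```

Going the other way (reading yaw from an echoed pose) is `yaw = 2 * atan2(qz, qw)` for a flat-floor quaternion.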

I need to change the different positions of the OpenMANIPULATOR-X, which I can find in the config.yaml file, but I’m not sure how these positions are linked to the ROS commands that I publish. How does this work?
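Typically such a config file maps a pose name to a set of joint angles, and the node that receives your published command looks the name up and sends those angles to the arm controller as a trajectory goal. A minimal sketch of that lookup under that assumption (the pose names and angles below are invented, not the real config.yaml contents):

```python
# Sketch of how named poses in a config.yaml usually work: the file
# maps a name to joint angles (radians), and the receiving node
# resolves a published command string to the angles to send.
NAMED_POSES = {                      # what a parsed config.yaml might look like
    'home': [0.0, -1.0, 0.3, 0.7],  # hypothetical 4-joint arm pose
    'grab': [0.0, 0.8, -0.4, 0.2],
}

def joint_goal_for(command, poses=NAMED_POSES):
    """Resolve a published command string to the joint angles to send."""
    if command not in poses:
        raise KeyError('unknown pose: %s' % command)
    return poses[command]
```

So publishing a pose name only works if the receiving node knows that name; adding a custom position means adding an entry to the file and using the same name in the published command.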

As mentioned in the steps above, I will need to receive feedback from ROS when a certain task starts or has been completed. How do I do this, so that my Python script can read this feedback?
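The usual ROS answer is a small status topic (e.g. a `std_msgs/String`) that the robot-side code publishes to when a task starts or finishes, and that your automation script subscribes to; for navigation specifically, move_base also exposes an actionlib interface whose done callback tells you a goal finished. Below is an in-process stand-in for that publish/subscribe pattern, just to show the shape of it (with ROS, `StatusTopic` would be a `rospy.Publisher`/`rospy.Subscriber` pair on an agreed topic name, and the callback would write to the server):

```python
# In-process stand-in for the rospy pub/sub pattern the script needs:
# the robot side publishes a status string when a task starts/finishes,
# the automation script subscribes with a callback and forwards the
# value to the server.
class StatusTopic:
    def __init__(self):
        self._callbacks = []
    def subscribe(self, callback):
        self._callbacks.append(callback)
    def publish(self, status):
        for cb in self._callbacks:
            cb(status)

def make_server_updater(updates):
    """Callback that records each status (stand-in for a server write)."""
    def on_status(status):
        updates.append(status)
    return on_status
```

The key point is that the feedback is push-based: the script registers a callback once and reacts to each status message, instead of polling ROS.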

These are the main problems I have. If someone could help me get started, that would be great! And if Python is not the way to go for the automation (it basically just reads server values and, based on these, commands the Turtlebot to perform certain tasks), please let me know what would work instead.

Another update: I’ve already created the custom positions of the OpenMANIPULATOR-X. Furthermore, I am able to set these positions by publishing a message through my Python automation script. So that part is done, three more steps to go:

All actions should be separated, instead of one mission that performs all the tasks consecutively.
I think I can do this by simply splitting the scenario.yaml file into separate scenario.yaml files, but I’m not sure if this is correct and, if so, how I can then activate the separate steps (as listed above). My Python script, which reads the server values, will need to publish the ROS commands, but they will have to be separate commands for each step.

I need to update the position of the AR tags in the room.yaml file, but the coordinate system doesn’t make any sense to me, so I have no clue how to adapt the values to my own map.

I will need some feedback from ROS when the Turtlebot is done positioning itself at either AR tag 0, AR tag 1 or the starting position. This feedback has to be interpreted by the Python automation script. How can I achieve this?

If anyone can give me some tips on how to solve or tackle (any of) these three steps, that would be great!