Tested with Cram v0.4.0, ROS version: Kinetic, Ubuntu 16.04

Bullet world demonstration

This tutorial demonstrates how the internal belief state of the robot is represented in CRAM, how to update the world, how to perform specialized reasoning in it and how to use lightweight simulation for quick testing of plans.

This tutorial assumes that you have a basic understanding of how the belief state is implemented internally. You can read about this in the documentation entry for geometric reasoning. You can find the code of this tutorial on Github.

Installing the projection project

As the system we will be working with will get quite complex, things might not go as expected. If you run into trouble while following this tutorial, feel free to consult the Support page or contact the person who last edited this tutorial; you can find the username in the footer.

If you followed the installation instructions for the full CRAM on the Installation page, you should be ready to go.

Environment setup

In order to be able to demonstrate many different features of the belief state representation we will set up the environment of the robot similar to how it is done for the real system: we will need an occupancy map for navigation, an IK solver for manipulation, a semantic map of the environment with the furniture etc.

First, let's set up the environment in our terminal by calling the launch file. It already brings up a roscore from within:

$ roslaunch cram_bullet_world_tutorial world.launch
Let's look into that launchfile.
$ roscd cram_bullet_world_tutorial
$ cat launch/world.launch
The occupancy map is for navigating the robot around furniture. The kitchen's URDF represents the kitchen as if it were an actual robot whose state can change: its joint states indicate specific points in the kitchen, which change if we, for example, open a drawer. Mainly, the joint states are used to visualize important places of the kitchen in RViz (see the picture in the section Objects in the world). The semantic map is used to reason about places in the kitchen using location designators, which we discuss in the section Abstract entity descriptions. To check if the kitchen URDF was uploaded to the ROS parameter server, we can execute the following:
$ rosparam get /kitchen_description | awk 'NR == 5'

Besides all the kitchen information, the PR2 URDF is also loaded:

$ rosparam get /robot_description | awk 'NR == 5' # read the fifth line of the URDF
To simulate simple grasping tasks with the PR2 robot in the bullet-world we need an inverse kinematics solver for the arms. In the launch file those are the pr2_arm_kinematics package nodes.

If both rosparam get commands return data, we can get going.

REPL setup

Now, let's load the package in the REPL (start the REPL with $ roslisp_repl):

CL-USER> (ros-load:load-system "cram_bullet_world_tutorial" :cram-bullet-world-tutorial)
CL-USER> (in-package :cram-bullet-world-tutorial)

As we will be talking to the ROS master, let's first start a node:

BTW-TUT> (roslisp:start-ros-node "bullet_world")

First, let's do all the initializations:

BTW-TUT> (cram-occupancy-grid-costmap::init-occupancy-grid-costmap) ; Inits the occupancy map for navigation.
BTW-TUT> (cram-bullet-reasoning-belief-state::ros-time-init) ; Resets the simulated time.
BTW-TUT> (cram-location-costmap::location-costmap-vis-init) ; Inits publishing of markers in RViz.
BTW-TUT> (cram-tf::init-tf) ; Inits the TF listener object.

The functions above initialize and clean up the environment of the REPL. Currently, we call those functions by hand for educational purposes. However, there exists a function that does the initialization automatically and spawns the completely set up visualization window. It will be described later.

The way CRAM is designed is that the belief state (the Bullet world) is only updated through Prolog queries, as we consider the belief state to be “robot knowledge” that we can query and that can get things asserted to it, and we try to manipulate the knowledge only through Prolog and not through Lisp directly. To learn more about using Prolog in CRAM you can quickly go through the Prolog tutorial.

A new Bullet world instance is created when we load the cram-bullet-reasoning ASDF system (which is the dependency of our tutorial package). It is stored in the *current-bullet-world* variable, which we can access through the following Prolog query:

BTW-TUT> (prolog:prolog '(btr:bullet-world ?world))

Bullet world initialization

To be able to see what is going on in the belief state and to ease the debugging process we visualize the current world instance with OpenGL:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (btr:debug-window ?world)))
Please note that the visualization window does not necessarily need to be open while the robot is operating and using its belief state; in fact, it should be closed to save processing power.

The “bullet visualization” window that appears should be empty at the moment. Let's spawn some objects.

The first thing we need is the floor plane:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (assert (btr:object ?world :static-plane :floor ((0 0 0) (0 0 0 1))
                                                  :normal (0 0 1) :constant 0))))
This assert query must not be confused with the Common Lisp assert macro. To see where this assert query comes from, you can look up its definition. Since it is not a function but a Prolog fact, you must search with rgrep in the cram_3d_world/cram_bullet_reasoning directory:
Alt-x rgrep RET
(<- (assert RET
*.lisp RET
/path/to/your/workspace/src/cram_3d_world/cram_bullet_reasoning/src/ RET
There are quite a few definitions here, serving various queries. All of them have in common that they change something in the world. The one we are reaching with the previous call is this one, located in world-facts.lisp:
(<- (assert (object ?world ?object-type ?name ?pose . ?args))
  (assert ?world (object ?object-type ?name ?pose . ?args)))

In the assert query, :static-plane is the type of the object we're spawning (it can be :static-plane, :sphere, :box, :mesh, :urdf etc.), :floor is the name of the object, and ((0 0 0) (0 0 0 1)) is the pose, given as a position and an orientation quaternion.
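The same assert pattern works for the other object types; each type takes its own initialization arguments after the pose. As a sketch, here is how spawning a simple box could look (the :mass and :size keywords are assumptions for illustration; check the assert definitions in cram_bullet_reasoning for the exact arguments of each type):

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (assert (btr:object ?world :box box-1 ((1 0 1) (0 0 0 1))
                                                  :mass 0.2 :size (0.2 0.2 0.2)))))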

Via the launch file at the beginning of this tutorial the robot URDF representation has been uploaded to the ROS parameter server under the name robot_description. Let's load it into the world:

BTW-TUT> (let ((robot-urdf
                 (cl-urdf:parse-urdf
                  (roslisp:get-param "robot_description"))))
           (prolog:prolog
            `(and (btr:bullet-world ?world)
                  (cram-robot-interfaces:robot ?robot)
                  (assert (btr:object ?world :urdf ?robot ((0 0 0) (0 0 0 1)) :urdf ,robot-urdf))
                  (cram-robot-interfaces:robot-arms-parking-joint-states ?robot ?joint-states)
                  (assert (btr:joint-state ?world ?robot ?joint-states))
                  (assert (btr:joint-state ?world ?robot (("torso_lift_joint" 0.15d0)))))))

The line (roslisp:get-param "robot_description") gets the URDF description of the robot as a string. (cl-urdf:parse-urdf …) then parses this string into a cl-urdf:robot object. The (cram-robot-interfaces:robot ?robot) predicate reads the name of the robot that was specified in a robot description Lisp package. For the PR2 it is the cram_pr2_description package, which defines the robot name as pr2, so ?robot gets bound to pr2. This line:

(assert (btr:object ?world :urdf ?robot ((0 0 0) (0 0 0 1)) :urdf ,robot-urdf))
spawns the robot. Keep in mind that spawning a robot in the world creates a new object. The first :urdf defines the type of the created object, while the second is used as a keyword to provide the object's URDF. Then we read out the hard-coded values for the robot's arm parking states, where the arms are not in the way, and assert those joint states to our PR2 object of the Bullet world.

Finally, let's spawn the kitchen environment (it will take some time as we query KnowRob for the semantic environment map):

BTW-TUT> (let ((kitchen-urdf
                 (cl-urdf:parse-urdf
                  (roslisp:get-param "kitchen_description"))))
           (prolog:prolog
            `(and (btr:bullet-world ?world)
                  (assert (btr:object ?world :urdf :kitchen ((0 0 0) (0 0 0 1))
                                      :urdf ,kitchen-urdf
                                      :collision-group :static-filter
                                      :collision-mask (:default-filter :character-filter)
                                      :compound T)))))
In the visualization window you can now see the kitchen loaded from the URDF with the semantic map information attached to it. If you only want a visualization of the semantic map, without the URDF, simply skip the URDF parameter: (assert (btr:object ?world :semantic-map my-no-urdf-kitchen ((0 0 0) (0 0 0 1)))). However, having two representations of the semantic map in the world at the same time can lead to undefined behavior, and we would have to clean the world and build it up again, so let's stick to the semantic map coupled with the URDF.

You can see the semantic map in RViz as well. Open up a terminal and type "rosrun rviz rviz" or just "rviz". Tweak your settings as shown in the image below.

If there are parameters missing (left box) you can add them using the 'Add' button below. Take the robot model of the kitchen for example:

To see the URDF of the kitchen in RViz you need its description name 'kitchen_description' and the TF prefix 'iai_kitchen'. In the launch file 'world.launch' you can see that the name of the kitchen URDF is remapped from 'robot_description' (the default) to 'kitchen_description', which is the name you need in your RViz configuration. When you open the robot model parameter, you have two checkboxes, 'Visual Enabled' and 'Collision Enabled'. The first shows a detailed mesh of each component in the kitchen, the latter shows the collision objects, which are far less detailed. We use collision objects when navigating the PR2, since calculating potential collisions with the detailed visual meshes would be very expensive.

When you are done with the robot model, look at the 'TF' display in RViz. Like the robot model, it tells you if any poses can't be resolved, e.g. when their parent link is not published in TF. Open up the 'Tree' parameter of 'TF' (left panel of RViz). This is a tree representing all the TF poses with their respective parents and children. If you change the pose of a frame, all of its children will change poses as well.

As you can see, this environment visualizes all the TF poses we can use in our tutorial. Get back to the debug window (Bullet visualization window) to see the actual meshes of the objects. The debug window only shows the collision meshes, not the detailed visuals, since we want to keep the bullet world fast and lightweight and concentrate on collisions not visuals.

The environment is now initialized. For convenience, there exists a function that will execute all of the above in one call. If you call it now it will reinitialize everything, which is also fine:

BTW-TUT> (roslisp-utilities:startup-ros)

This starts a ROS node and executes the ROS initialization functions, which are registered in CRAM through roslisp-utilities:register-ros-init-function. You can see which functions are registered by executing the following:

BTW-TUT> (alexandria:hash-table-keys roslisp-utilities::*ros-init-functions*)
You should find all of the functions that we manually called above and one additional, the (init-projection) function, which initializes the Bullet world with the robot and the kitchen environment.
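The same mechanism is available for your own code. As a sketch of how you could hook custom initialization into startup-ros (my-init-function is a hypothetical name; the ros-info call merely logs a message):

BTW-TUT> (defun my-init-function ()
           (roslisp:ros-info (bullet-world-tutorial) "custom init"))
BTW-TUT> (roslisp-utilities:register-ros-init-function my-init-function)

After registering, the next call to (roslisp-utilities:startup-ros) will also run my-init-function.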

Manipulating the kitchen

Make sure you have spawned your kitchen from the URDF, not from the semantic map. Check the section above for spawning the kitchen. Let us inspect the kitchen now:

BTW-TUT> (btr:object btr:*current-bullet-world* :kitchen)

If needed, change the name ':kitchen' to the name you used when spawning the kitchen. This call should return a robot-object in the REPL. Inspect it by right-clicking on the urdf-semantic-map-object and choosing Inspect. You can look into all its joints and states, how they are linked and in what state they are (press L to go up in the inspector). Now we are about to change one of those joints, causing the kitchen to open the door of the fridge.

BTW-TUT> (btr:set-robot-state-from-joints
          '(("iai_fridge_door_joint" 0.3d0))
          (btr:object btr:*current-bullet-world* :kitchen))

If you did everything right, the fridge door should now be slightly open. Check the other joints and change them as you wish. Watch their limits though.
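For example, closing the fridge door again amounts to setting the same joint back to 0, its closed position:

BTW-TUT> (btr:set-robot-state-from-joints
          '(("iai_fridge_door_joint" 0.0d0))
          (btr:object btr:*current-bullet-world* :kitchen))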

Spawn and change objects in the world

From this point on let us use the fast initialization function to start the bullet-world:

BTW-TUT> (roslisp-utilities:startup-ros)

Now let us spawn some household objects:

(prolog:prolog '(and (btr:bullet-world ?world)
                     (assert (btr:object ?world :mesh mug-1 ((0 0 2) (0 0 0 1))
                                         :mass 0.2 :color (1 0 0) :mesh :mug))))
A mug appears above the robot's head. We can also move objects:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (assert (btr:object-pose ?world mug-1 ((-1 1 1) (0 0 0 1))))))
Now the mug is above the kitchen island table.

To get the instance of the spawned Bullet object there is a function btr:object:

BTW-TUT> (btr:object btr:*current-bullet-world* 'mug-1)
We can get the pose of that object through the function btr:pose:
BTW-TUT> (btr:pose (btr:object btr:*current-bullet-world* 'mug-1))
#<CL-TRANSFORMS:POSE
   #<3D-VECTOR (-1.0d0 1.0d0 1.0d0)>
   #<QUATERNION (0.0d0 0.0d0 0.0d0 1.0d0)>>

Geometric reasoning

Our mug is floating in the air. The belief state world in CRAM is static; to apply physics to it, we use the Bullet physics engine to simulate the world for a certain amount of time.

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (btr:simulate ?world 10)))
This simulates the world for 1 second and the mug falls onto the table.

We can also ask if the scene is stable, i.e., whether objects would change their positions if we simulated the world for a longer time:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (btr:stable ?world)))
(((?WORLD . ...)))
This copies our current world, simulates the copy for a certain amount of time and compares the poses of the objects before and after the simulation. If the result is (NIL) or anything else except NIL, then the mug is stably standing on the table.

Now, let's ask if the mug is currently visible for the robot:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (cram-robot-interfaces:robot ?robot)
                              (btr:visible ?world ?robot mug-1)))

Let's move PR2 such that the cup is in its visibility range:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (assert (btr:object-pose ?world cram-pr2-description:pr2
                                                       ((0.5 1 0) (0 0 1 0))))))
and ask the visibility query again. Instead of retyping it, put the cursor at the REPL prompt and press Ctrl-Up: this brings back previously executed commands.

Abstract entity descriptions

Now let's generate a distribution of locations from which the robot would be able to see the mug. Before we do anything, let's define the costmap metadata, which specifies the grid size and other parameters needed to compute the distribution:

(def-fact-group costmap-metadata ()
    (<- (location-costmap:costmap-size 12 12))
    (<- (location-costmap:costmap-origin -6 -6))
    (<- (location-costmap:costmap-resolution 0.04))
    (<- (location-costmap:costmap-padding 0.3))
    (<- (location-costmap:costmap-manipulation-padding 0.4))
    (<- (location-costmap:costmap-in-reach-distance 0.7))
    (<- (location-costmap:costmap-reach-minimal-distance 0.2))
    (<- (location-costmap:visibility-costmap-size 2))
    (<- (location-costmap:orientation-samples 2))
    (<- (location-costmap:orientation-sample-step 0.1)))
Now, we create an abstract location description that we call a designator. The abstract description gets grounded into specific geometric coordinates with the reference function.
BTW-TUT> (let* ((mug-pose (btr:pose (btr:object btr:*current-bullet-world* 'mug-1)))
                (?mug-pose-stamped (cl-transforms-stamped:pose->pose-stamped "map" 0 mug-pose))
                (?mug-location-designator (desig:a location (pose ?mug-pose-stamped)))
                (to-see-designator (desig:a location (visible-for pr2)
                                                     (location ?mug-location-designator))))
             (desig:reference to-see-designator))
#<CL-TRANSFORMS-STAMPED:POSE-STAMPED
   FRAME-ID: "map", STAMP: 0.0
   #<3D-VECTOR (-0.2800002098083496d0 1.6999998092651367d0 0.0d0)>
   #<QUATERNION (0.0d0 0.0d0 -0.9338500185757329d0 0.3576648470371437d0)>>

We create two symbolic descriptions: one for the pose that we want to perceive, which we wrap into a pose-stamped, and one for a robot pose from which the robot would be able to see the mug.

We can also abstract away completely from geometric poses (above we used the mug-pose variable, which was a quantitative value) by describing the mug as an object standing on the table called "kitchen_island" in the URDF. This is equivalent to telling the robot: "go stand in a position such that you would be able to perceive a mug standing on the kitchen island table top". Please note that we do not explicitly specify which mug we are searching for; we just say that somewhere on the table there should be a mug.

(let* ((?on-counter (desig:a location 
                             (on (desig:an object
                                           (type counter-top)
                                           (urdf-name kitchen-island-surface)))))
       (location-to-see (desig:a location (visible-for pr2)
                                          (location ?on-counter))))
    (desig:reference location-to-see))
#<CL-TRANSFORMS-STAMPED:POSE-STAMPED
   FRAME-ID: "map", STAMP: 0.0
   #<3D-VECTOR (-0.07000017166137695d0 1.8199996948242188d0 0.0d0)>
   #<QUATERNION (0.0d0 0.0d0 -0.9072884056004601d0 0.42050891674609586d0)>>

Here, as we didn't say specifically which pose we are trying to perceive, the robot randomly samples one pose from all the poses on the kitchen island table and generates a distribution for perceiving that pose, i.e., it looks in the general direction of the table. If you look at the debug window now, you can see an area that looks like part of a Gaussian bell near the table. We can use that pose to move the robot there.

BTW-TUT> (btr-utils:move-robot *) ; the REPL variable * holds the previous result, here the referenced pose

Attaching objects to the robot

When manipulating objects in the environment, e.g. carrying a cup in the gripper, the simulation needs to be told that such objects should move when the respective robot part moves. If the PR2 robot grasps an object, the object is automatically attached to the gripper, more specifically, to the link of the rigid body of the gripper. Via Prolog it is possible to attach objects to robot parts manually. This comes in handy when, for example, cutlery is stored in a drawer and should move when the drawer opens. In the following we will place a fork inside a drawer and attach it to the link of the drawer. This requires the kitchen to be loaded as URDF, not as semantic map; see the section above on how to load the kitchen if in doubt. If you work with the latest version of this tutorial's repository, you should be fine.

Assuming that the cram_bullet_world_tutorial is already loaded, we need to reset the world:

(roslisp-utilities:startup-ros) ;; restart world
(add-objects-to-mesh-list) ;; load fork mesh

The function set-robot-state-from-joints can change the state of joints. A joint can be the hinge of a door, the rails of a drawer, the elbow of an arm of the PR2, basically everything that joins two parts and can be moved in some kind of way. The state of such a joint indicates its current position, e.g. how hard the elbow is bent or how far the drawer is pulled. The function takes two arguments, a list of joints with their desired positions, and the corresponding robot, in this case the kitchen.

Since a closed drawer would hide the fork, let's open it:

BTW-TUT> (btr:set-robot-state-from-joints
          '(("sink_area_left_upper_drawer_main_joint" 0.5))
          (btr:object btr:*current-bullet-world* :kitchen))

Spawn the fork inside the drawer:

BTW-TUT> (btr-utils:spawn-object 'fork-1 :fork
                                 :pose '((1.0 0.9 0.75) (0 0 0 1)))

If this throws an error, you probably forgot to use (add-objects-to-mesh-list) previously, since we need the fork mesh. The whole scene should now look like this:

Try to move the drawer a bit with set-robot-state-from-joints. You'll see that the fork stays in place, even if the scene is simulated with (btr:simulate btr:*current-bullet-world* 10). To solve this, the fork needs to be attached to this specific drawer. All parts and links of the kitchen can be inspected with (btr:object btr:*current-bullet-world* :kitchen), but to save some time the following example already contains the correct link.

(prolog '(and (btr:bullet-world ?world)
              (btr:%object ?world fork-1 ?fork)
              (assert (btr:attached ?world :kitchen "sink_area_left_upper_drawer_main" ?fork))))
Notice that the joint name differs from the link name. Now the fork moves when the drawer is moved. Every attachment can be checked with the following predicate:
(prolog '(and (btr:bullet-world ?world)
              (btr:attached ?world :kitchen ?_ fork-1)))
This checks whether there are any attachments between the kitchen and the fork. If needed, it is possible to specify the name of a link to check that link specifically, or to replace the blank with ?link to get the link(s) the object is attached to. To detach an object, the retract predicate does the job.
(prolog '(and (btr:bullet-world ?world)
              (assert (btr:retract ?world :kitchen fork-1 ?_))))
The fourth argument of retract, which is kept blank here, can specify the link name the object should be detached from, but this is only needed if the object is attached to multiple links.
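To illustrate the ?link variant mentioned above: binding a free variable instead of the blank returns the link(s) the fork is attached to:

(prolog '(and (btr:bullet-world ?world)
              (btr:attached ?world :kitchen ?link fork-1)))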

Visualizing coordinate frames of poses

The Bullet world visualization does not show the coordinate frame of an object, but knowing an object's orientation is important for many manipulation tasks the robot has to perform. There is a function called btr:add-vis-axis-object for this; as parameter you can pass either a pose or an object name, e.g.:

(btr:add-vis-axis-object (cl-transforms:make-pose (cl-transforms:make-3d-vector 1 0 1) (cl-transforms:make-identity-rotation)))
;;After spawning an object (like above in the code) with the name mug-1
(btr:add-vis-axis-object 'mug-1)

Executing motions

From now on we will use the utility functions from the cram_bullet_reasoning_utilities package to save time on writing lengthy Prolog queries. The package contains a number of utility functions that make rapid prototyping with the Bullet world faster and easier: functions such as spawn-object, move-object, kill-all-objects, move-robot etc. execute the Prolog queries with default values for parameters that are not important, e.g. default colors for objects. By pressing . while holding Alt (Alt-.) while the Emacs cursor is on a function name, you will be redirected to the definition of the function to see what exactly it does. Alt-, brings back the previous window. Try this for init-projection, for example.

We need a clean environment for this tutorial, so let's clean the world:

BTW-TUT> (roslisp-utilities:startup-ros)

Now, let's try to grasp an object. For that we will use a mesh of a bottle loaded from the resources subdirectory of the tutorial.

BTW-TUT> (add-objects-to-mesh-list)
Then we spawn an object of type bottle and move it onto the table.
BTW-TUT> (btr-utils:spawn-object 'bottle-1 :bottle)
BTW-TUT> (btr-utils:move-object 'bottle-1
                                (cl-transforms:make-pose
                                 (cl-transforms:make-3d-vector -2 -0.9 0.83)
                                 (cl-transforms:make-identity-rotation)))
Lastly, we simulate the world for 10 seconds to make sure nothing moves unexpectedly at runtime.
BTW-TUT> (btr:simulate btr:*current-bullet-world* 100)
Before we grasp the bottle, let's first prepare the PR2, just three steps: move the arms out of sight, navigate the base in front of the bottle and look at the bottle. The point we want the base to navigate to can be hard coded and saved temporarily.
BTW-TUT> (setf ?grasp-base-pose
               (cl-transforms:make-pose
                (cl-transforms:make-3d-vector -1.8547d0 -0.251d0 0.0d0)
                (cl-transforms:axis-angle->quaternion
                 (cl-transforms:make-3d-vector 0 0 1)
                 (/ pi -2))))
The same thing can be done with the point we want to look at.
BTW-TUT> (setf ?grasp-look-pose
               (cl-transforms:make-pose
                (cl-transforms:make-3d-vector 0.65335d0 0.076d0 0.758d0)
                (cl-transforms:make-identity-rotation)))
To execute any plan in CRAM, we need a top-level context. Besides that, we also use a macro to specify that the demo should be executed in simulation, not on the real robot. Putting your plan inside pr2-proj:with-simulated-robot indicates that your robot is in the projection environment; the macro also contains a top-level call within it. with-simulated-robot is a way to abstract the robot details away from your plans; its counterpart for execution on a real robot is pr2-pms:with-real-robot. Also note that without specifying the robot you want to execute on, TF for it is not published and you'll run into errors. Putting all this together, we end up with this plan:
BTW-TUT> (pr2-proj:with-simulated-robot
           (cpl:par
             (pr2-proj::park-arms)
             (pr2-proj::drive ?grasp-base-pose))
           (pr2-proj::look-at-pose-stamped ?grasp-look-pose))
We can execute some movements in parallel if they use different joints of the robot; that's what cpl:par is for. The function park-arms performs a motion on the joints of both arms that brings them into a specific position, so they don't obstruct the field of view. We have used simple calls to low-level methods to achieve motions such as driving to ?grasp-base-pose and looking at ?grasp-look-pose. These can also be achieved with the corresponding motion designators, which we will look at in later tutorials.

To grasp the bottle we need to have its pose in the room. Therefore, we first perceive it and store the result in *perceived-object*:

BTW-TUT> (defvar *perceived-object* nil "Object designator returned from perception")
BTW-TUT> (pr2-proj:with-simulated-robot
           (setf *perceived-object*
                 (pr2-proj::detect (desig:an object (type bottle)))))

With the resulting perceived object we perform the picking-up action. With the torso so far down we might not be able to reach the bottle, so we also need to move the torso up:

BTW-TUT> (pr2-proj:with-simulated-robot
           (let ((?perceived-bottle-desig *perceived-object*))
             (pr2-proj::move-torso 0.3)
             ;; pick up by performing an action designator; here with the right arm
             (exe:perform
              (desig:an action
                        (type picking-up)
                        (arm right)
                        (object ?perceived-bottle-desig)))))

There is no atomic motion for picking up an object; in fact, picking up is comprised of multiple move-arm motions. Pick-up is therefore implemented as a plan and called by performing an action designator. This is explained in the next tutorial on writing simple mobile manipulation plans.