Void’s Vault

Knowledge source for efficiency.

AR.Drone 2 With ROS and OpenCV: Get Started Quick With Ubuntu or Mint

In this post, I show how to get started: how to install ROS, how to make the drone fly, how to control it with a joystick through ROS, and how to use OpenCV with ROS camera feeds. I also explain how to run ROS on Linux Mint.

As some might have noticed, I initially wrote a post showing how to use the tools offered by Parrot to develop programs in C or C++ to make the AR.Drone fly. I suggest you read that post first if you want to know how to use USB joysticks on Linux. Fortunately, I was told about ROS a couple of months ago, so I did not have to follow that tedious path just to make the drone fly by itself.

I was recently at a robotics conference in Toronto, where I saw a lot of research groups using the AR.Drone. So I teamed up with a strong guy named Sebastien Audet, and we began to learn how to use ROS with the AR.Drone. In this article, I will present everything we found on the web and show the code that we implemented.

First of all, if you want to install ROS, you will need a Linux OS. Here’s the installation website.

The main supported platform is Ubuntu, so I suggest you use Ubuntu. Since Linux Mint is very close to Ubuntu, and because we prefer Mint over Ubuntu, we used Linux Mint. Here’s the installation website.

The installation guide for ROS is pretty complete, so I suggest you follow it. The only problem on Mint is that you need to override some export variables so that ROS thinks you’re using Ubuntu. To do that, first install ROS. Then type the following bash command:

```shell
cat /etc/*release*
```

This will show you the Ubuntu version running underneath your distribution of Linux Mint. For me, it was 12.04 (Ubuntu precise 12.04.2 LTS). With this information, append the following at the end of your .bashrc file. Note that I installed ROS fuerte, so change “fuerte” to whatever version you installed.

```shell
source /opt/ros/fuerte/setup.bash
export ROS_OS_OVERRIDE=ubuntu:12.04
export ROS_WORKSPACE="/home/username/path/to/your/ros/workspace"
export ROS_PACKAGE_PATH=$ROS_PACKAGE_PATH:/home/username/path/to/your/ardrone4ROS/packages
```

The first line loads the ROS commands into your terminal. The next one tells your ROS installation to act as if you were running Ubuntu. The third defines where roscd takes you. The last one appends your own package paths to the ROS package path list, so any ROS project should live under a path listed in this variable.
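
To sanity-check that the override is in effect, open a new terminal and echo the variable back. A trivial check, but it catches typos in .bashrc; the value below assumes the Ubuntu 12.04 base found earlier, so adjust it to whatever `cat /etc/*release*` reported for you:

```shell
# this is what the .bashrc line above should leave you with
export ROS_OS_OVERRIDE=ubuntu:12.04
echo "$ROS_OS_OVERRIDE"
```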

When this is done, you will need ardrone_autonomy and ardrone_tutorials. I suggest you follow their own instructions to install everything properly, but here’s the list of commands to run:

```shell
sudo rosdep init
rosdep update
sudo apt-get install ros-fuerte-joystick-drivers
rosdep install joy
sudo apt-get install daemontools libudev-dev libiw-dev
roscd
git clone https://github.com/AutonomyLab/ardrone_autonomy.git
git clone https://github.com/mikehamer/ardrone_tutorials.git
rospack profile
cd ardrone_autonomy
./build_sdk.sh
rosmake -a
```

OK, now everything should work. If not, google the error message, or contact me; I might have forgotten something.

==UPDATE== Some ROS commands might return errors like this one:

```
joy: No definition of [sensor_msgs] for OS version []
```

In that case, you have to force the OS override by adding the option to the command, e.g.:

```shell
rosdep install --os=ubuntu:precise joy
```
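
Note that rosdep wants the Ubuntu codename here (“precise” for 12.04), while ROS_OS_OVERRIDE above used the version number. If you keep mixing them up, a tiny lookup helps; this is just an illustrative sketch covering the releases relevant at the time:

```python
# Ubuntu version numbers and the codenames rosdep expects
UBUNTU_CODENAMES = {
    "11.10": "oneiric",
    "12.04": "precise",
    "12.10": "quantal",
}

def rosdep_os_string(version):
    # e.g. "12.04" -> "ubuntu:precise"; fall back to the raw version
    return "ubuntu:" + UBUNTU_CODENAMES.get(version, version)
```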

Now, if you just want to fly the drone with the keyboard, here’s the command:

```shell
roslaunch ardrone_tutorials keyboard_controller.launch
```

But if you’re here, it’s because you want to use a joystick (or nothing at all, like I want to do) and to use OpenCV instead of Qt in order to work easily with the image stream, right?

It’s time to create a small project in Python. In your manifest.xml file, placed at the root of your project, write:

```xml
<package>
  <depend package="ardrone_autonomy" />
  <depend package="joy" />
  <depend package="sensor_msgs" />
  <depend package="opencv2" />
  <depend package="cv_bridge" />
  <depend package="rospy" />
  <depend package="std_msgs" />
</package>
```

Just like in ardrone_tutorials, you should have a launch file. In it, it’s pretty straightforward to set the various flight parameters as well as the joystick parameters. Just read the example in ardrone_tutorials and you should get it working quickly. See my previous post if you want to know what the button and axis values of your gamepad are.
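
If you end up adapting the joystick mapping yourself, the usual trick is to apply a deadzone and a scale to each axis before turning it into a velocity command. This is a hypothetical helper for illustration, not code from ardrone_tutorials:

```python
def axis_to_command(axis_value, scale=1.0, deadzone=0.1):
    # Map a raw joystick axis in [-1, 1] to a drone velocity command:
    # ignore small values around the stick's center, then scale and
    # clamp back into [-1, 1].
    if abs(axis_value) < deadzone:
        return 0.0
    value = axis_value * scale
    return max(-1.0, min(1.0, value))
```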

At that point, you should easily be able to make your gamepad work with the drone by editing joystick_controller.py, e.g. changing the default button and axis values. When this is done, just run the following command:

```shell
roslaunch ardrone_tutorials joystick_controller.launch
```

Now, how do you use OpenCV instead of Qt? (Yes, the basic example uses Qt.) You need to know that using both at the same time will not work well: Qt’s thread seems to conflict with OpenCV, so OpenCV will crash randomly if run inside Qt’s execution loop. I found that somewhere while googling around, and it solved our problem perfectly.
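
The pattern used in the code below boils down to this: the ROS callback thread stores the newest frame under a lock, and a timer thread takes it out to display. Stripped of ROS and OpenCV, a minimal sketch of that idea:

```python
import threading

class LatestFrame(object):
    """Hold the most recent frame, safely shared between two threads."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def put(self, frame):
        # called from the subscriber callback thread
        with self._lock:
            self._frame = frame

    def get(self):
        # called from the display/timer thread; returns None until
        # the first frame arrives
        with self._lock:
            return self._frame
```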

Here’s how to convert ROS images to OpenCV images and the other way around. This example is modified from ROS’ website, so go there if you ever need more details.

```python
import roslib; roslib.load_manifest('autopilot')
import rospy
import cv2.cv as cv
from cv_bridge import CvBridge, CvBridgeError

bridge = CvBridge()

def ToOpenCV(ros_image):
    try:
        cv_image = bridge.imgmsg_to_cv(ros_image, "bgr8")
        return cv_image
    except CvBridgeError, e:
        print e
        raise Exception("Failed to convert to OpenCV image")

def ToRos(cv_image):
    try:
        ros_image = bridge.cv_to_imgmsg(cv_image, encoding="passthrough")
        return ros_image
    except CvBridgeError, e:
        print e
        raise Exception("Failed to convert to ROS image")
```

And finally, here’s some quick code to make ROS work without the Qt loop. First, remove the Qt timers and related code from drone_video_display.py:

```python
import roslib; roslib.load_manifest('mymanifestname')
import rospy

# Import the two types of messages we're interested in
from sensor_msgs.msg import Image         # for receiving the video feed
from ardrone_autonomy.msg import Navdata  # for receiving navdata feedback

# We need to use resource locking to handle synchronization between the display thread and ROS topic callbacks
from threading import Lock

# An enumeration of Drone Statuses
from drone_status import DroneStatus

# We need an image converter to use OpenCV
import cv2.cv as cv
from image_converter import ToOpenCV, ToRos

# Some Constants
CONNECTION_CHECK_PERIOD = 2.250 # seconds
GUI_UPDATE_PERIOD = 0.20 # seconds

class DroneVideoDisplay():
    StatusMessages = {
        DroneStatus.Emergency : 'Emergency',
        DroneStatus.Inited    : 'Initialized',
        DroneStatus.Landed    : 'Landed',
        DroneStatus.Flying    : 'Flying',
        DroneStatus.Hovering  : 'Hovering',
        DroneStatus.Test      : 'Test (?)',
        DroneStatus.TakingOff : 'Taking Off',
        DroneStatus.GotoHover : 'Going to Hover Mode',
        DroneStatus.Landing   : 'Landing',
        DroneStatus.Looping   : 'Looping (?)'
        }
    DisconnectedMessage = 'Disconnected'
    UnknownMessage = 'Unknown Status'

    def __init__(self):
        # Subscribe to the /ardrone/navdata topic, of message type Navdata, and call self.ReceiveNavdata when a message is received
        self.subNavdata = rospy.Subscriber('/ardrone/navdata',Navdata,self.ReceiveNavdata)

        # Subscribe to the drone's video feed, calling self.ReceiveImage when a new frame is received
        self.subVideo   = rospy.Subscriber('/ardrone/image_raw',Image,self.ReceiveImage)

        # Holds the image frame received from the drone and later processed by the display thread
        self.image = None
        self.imageLock = Lock()
        cv.NamedWindow("windowimage", cv.CV_WINDOW_AUTOSIZE)

        # Holds the status message to be displayed on the next update
        self.statusMessage = ''

        # Tracks whether we have received data since the last connection check
        # This works because data comes in at 50Hz but we're checking for a connection at 4Hz
        self.communicationSinceTimer = False
        self.connected = False

        # A timer to check whether we're still connected
        self.connectionTimer = rospy.Timer(rospy.Duration(CONNECTION_CHECK_PERIOD),self.ConnectionCallback)

        # A timer to redraw the display
        self.redrawTimer = rospy.Timer(rospy.Duration(GUI_UPDATE_PERIOD),self.RedrawCallback)

    # Called every CONNECTION_CHECK_PERIOD seconds; if we haven't received anything since the last callback, assume we are having network troubles
    def ConnectionCallback(self,event):
        self.connected = self.communicationSinceTimer
        self.communicationSinceTimer = False

    def RedrawCallback(self,event):
        if self.image is not None:
            # We have some issues with locking between the display thread and the ROS messaging thread due to the size of the image, so we need to lock the resource
            self.imageLock.acquire()
            try:
                image_cv = ToOpenCV(self.image)
            finally:
                self.imageLock.release()

            print "showing image"
            cv.ShowImage("windowimage", image_cv)
            cv.WaitKey(2)
        # Update the status bar to show the current drone status & battery level
        # print (self.statusMessage if self.connected else self.DisconnectedMessage)

    def ReceiveImage(self,data):
        # Indicate that new data has been received (thus we are connected)
        self.communicationSinceTimer = True

        # We have some issues with locking between the display thread and the ROS messaging thread due to the size of the image, so we need to lock the resource
        self.imageLock.acquire()
        try:
            self.image = data # Save the ROS image for processing by the display thread
        finally:
            self.imageLock.release()

    def ReceiveNavdata(self,navdata):
        # Indicate that new data has been received (thus we are connected)
        self.communicationSinceTimer = True

        # Update the message to be displayed
        msg = self.StatusMessages[navdata.state] if navdata.state in self.StatusMessages else self.UnknownMessage
        self.statusMessage = '{} (Battery: {}%)'.format(msg, int(navdata.batteryPercent))
```

And the following gets rid of the main Qt loop in a stupid way, but it will work as a good starting point. The loop is in the main of joystick_controller.py:

```python
# Setup the application
if __name__=='__main__':
    # First we set up a ROS node, so that we can communicate with the other packages
    rospy.init_node('ardrone_joystick_controller')

    # Next load in the parameters from the launch-file
    # Blah blah blah
    # Blah blah blah

    # Now we construct our display and associated controller (no Qt application needed)
    display = DroneVideoDisplay()
    controller = BasicDroneController()

    # Subscribe to the /joy topic and handle messages of type Joy with the function ReceiveJoystickMessage
    subJoystick = rospy.Subscriber('/joy', Joy, ReceiveJoystickMessage)

    # Crude replacement for the Qt event loop: keep the process alive while
    # the rospy timers and subscriber callbacks do all the work
    while not rospy.is_shutdown():
        rospy.sleep(5.0)
```

That’s it! Now compile your project and launch it using the same bash command as before. ROS will do the rest and you will be able to make your drone fly while streaming images to OpenCV. Isn’t that beautiful? Enjoy!