Software Setup

This is not an exhaustive resource covering every step of the software setup; however, we cover the majority of the important information here.

Overview

Important notes before moving further into testing additional packages

It is not necessary to run the Stereo DNN in order to test the functionality of the TrailNet DNN for path navigation. Below is the rqt_graph showing the flow of subscribed and published topics between nodes. It is advantageous to validate that the drone can carry out just the navigation package tasks on its own, because the TX2 is less likely to overheat or crash processes than if both the TrailNet DNN and the Stereo DNN are running at the same time. From our experience, modifications to the frame are necessary in order to install the heatsink and fan (or fans) needed to keep the TX2 below 60 degC. If the board goes past 60 degC it will automatically reduce its clock speed in order to prevent damage, thus artificially reducing the true capabilities of the Jetson (a quick way to monitor the temperature from a terminal is sketched below the graph). We want to eventually combine several packages and perform even more complex navigation techniques, so we suggest making the frame modifications described in the "Hardware Setup" section if you want to test beyond this point.

Fully operational RQT_GRAPH for TrailNet Navigation with YOLO for Object Detection
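As a side note on the thermal throttling mentioned above, you can watch the module temperatures from a terminal while testing. A minimal sketch, assuming the standard Linux sysfs thermal interface (zone names and counts vary by board):

# Print each thermal zone's name and temperature (sysfs reports millidegrees C)
for zone in /sys/class/thermal/thermal_zone*; do
    echo "$(cat $zone/type): $(( $(cat $zone/temp) / 1000 )) degC"
done

jtop (installed in a later step) shows the same temperatures in a friendlier interface.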

JetPack 4.2.2 installation instructions

Although these steps are for JetPack 4.2.2, the same process can be followed for JetPack 4.3; just make sure you flash the correct carrier board firmware for that version of JetPack.

Download the Nvidia SDK Manager

Download the J120 Firmware 2.2 for the TX2

or

Download the Orbitty Firmware

Start the SDK Manager

sdkmanager

Select JetPack 4.2.2 and make sure you have the host machine unchecked.

In this case it will be JetPack 4.2.2.

First we need to populate the ~/nvidia/nvidia_sdk folder.

You only need to check the Jetson OS components for download and installation.

  • Note you cannot choose the "Download now. Install later" option because we need the ~/nvidia/nvidia_sdk files to be populated so we can migrate our firmware into these files for the J120.

  • We will install the SDK components later on.

  • Once the OS image is downloaded and the ~/nvidia/nvidia_sdk folder is populated, we can skip the flash and navigate to the carrier board firmware directory.

Check the README file in the firmware folder, and verify that the correct paths and files are copied over into the ~/nvidia/nvidia_sdk folder.

For example, for the J120 it would look something like this:

cp -r ~/Downloads/J120_4.2/J120_kernel/* /home/<user>/nvidia/nvidia_sdk/JetPack_4.2_Linux_JETSON_TX2/Linux_for_Tegra/
cd /home/<user>/nvidia/nvidia_sdk/JetPack_4.2_Linux_JETSON_TX2/Linux_for_Tegra/
sudo ./apply_binaries.sh

After running ./apply_binaries.sh you should not get any errors; once it completes cleanly you are good to move to the next step.

Re-Flashing with the J120 firmware patch installed

  • Now you need to turn off the TX2 on the J120 and boot back into recovery mode.

  • Connect a micro-USB cable to the J120 and to your computer (preferably the one that came with the TX2 that has the little green controller symbol on it).

  • Connect power to the J120; the green LED should not come on, indicating the OS has not booted.

  • Then, you need to boot into recovery mode:

    --> Hold the REC (Recovery) button and then press the power button (while still holding REC).

    --> Press the reset button once while STILL holding the REC button, then wait 2 seconds before releasing the REC button.

  • Now, to verify the device is connected to the HOST PC, you can run 'lsusb'; if NVIDIA Corp. shows up as a listed USB device you are set (see the snippet after this list).

  • Another way to verify the TX2 has booted into recovery mode is to hook up the HDMI to a display; nothing should appear on the display if it is properly booted into recovery mode.

  • Then follow the sdkmanager steps as before. ONLY check the Jetson OS components for download and installation, NOT the SDK components.

  • It should indicate that the OS image is ready with a green check mark because we already populated and flashed it before; now that we have modified that image, it will re-flash.
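To make the lsusb check from above quicker, you can filter for the device directly. A minimal sketch; treat the exact vendor string as an assumption, since it varies slightly between lsusb versions:

# Should print a line with the NVIDIA vendor entry if the TX2 is in recovery mode
lsusb | grep -i nvidia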

OS Boot-up and Setup

  • Once the sdkmanager has completed the flash, you should see the TX2 boot up if you are connected to an HDMI display.

  • Verify that the J120 patch worked by checking that both the top and bottom USB ports are working.

  • Note that only the upper USB port is USB 3.0; the bottom is USB 2.0.

  • Also note that the micro-USB port does not work, so you won't see it appear using lsusb anymore; it will only appear in forced recovery mode. You can't use this port in normal operation.

  • While you are here you can configure your username/password; once those basics are finished in the setup wizard you can move to the next step.

Installing the SDK components from sdkmanager

  • This next step was traditionally done using an Ethernet connection, but NVIDIA has since updated the SDK Manager so that the USB to micro-USB connection acts as USB Ethernet.

  • If you remember, our carrier board disabled the micro-USB port, and in order to install the SDK components we cannot be in forced recovery mode. That mode is only used for flashing!!

  • So here is where it gets silly

  • Power down the TX2 and remove it from the J120 carrier board.

  • Get out the old development board, install the TX2 onto it, and make sure you bring your antennas with you because we need to connect to the internet.

  • Once you have everything moved over, boot up and log in.

  • Open a terminal and run:

sudo apt-get update && sudo apt-get upgrade
  • Now connect the micro-USB to your HOST PC and run the sdkmanager.

  • Run ifconfig on the TX2 and your host machine to ensure the IPs are 192.168.55.1 and 192.168.55.100 respectively (a quick connectivity check is sketched after this list).

  • Run lsusb and see if the NVIDIA Corp. device shows up; it should.

  • ONLY check the SDK components for download and installation.

  • After the download is complete, a window will appear asking for your username and password, with a default IP address that corresponds to the serial connection you have over USB with the TX2.

  • Continue the installation; since we already updated the packages earlier we hopefully won't have any failed packages. (Maintain the internet connection during the install just to be safe.)
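Before kicking off the SDK component install, it can save time to confirm the USB Ethernet link actually works between the two machines. A quick check from the HOST PC, assuming the default 192.168.55.x addresses listed above:

# From the HOST PC: the TX2 should answer over the USB-Ethernet link
ping -c 3 192.168.55.1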

Complete

  • Run nvcc --version and see if CUDA is there! (If it is not found, see the PATH note after this list.)

  • You can verify any other packages as well, but that's all. Easy! ;D
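If nvcc is not found even though the SDK components installed cleanly, CUDA's bin directory may simply not be on your PATH. A hedged fix, assuming the default /usr/local/cuda install location:

# Add CUDA to your PATH (append these to ~/.bashrc to make them permanent)
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
nvcc --version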

Install jetson-stats

jetson-stats is a great package that allows you to quickly check all of the critical information and health stats on the TX2.

sudo apt install python-pip

sudo -H pip install -U jetson-stats

#Reboot/Logout & Login

sudo jtop

Here you can change the power mode to MAXN and start the jetson_clocks service.

Alternatively you can run

sudo nvpmodel -m 0
sudo jetson_clocks
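To confirm the mode change took effect, you can query the active power mode:

# Should report MAXN (mode 0) after the commands above
sudo nvpmodel -q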

Setting up ROS Melodic

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

sudo apt update

sudo apt install ros-melodic-desktop-full

echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc

source ~/.bashrc

sudo apt install python-rosdep python-rosinstall python-rosinstall-generator python-wstool build-essential

sudo rosdep init

rosdep update

#Project Dependencies
sudo apt-get install -y  ros-melodic-mavros ros-melodic-mavros-extras ros-melodic-joy python-catkin-tools tmux ros-melodic-tf2-geometry-msgs

sudo apt-get install -y gstreamer1.0-plugins-bad

sudo apt-get install -y libgstreamer1.0-dev gstreamer1.0-tools libgstreamer-plugins-base1.0-dev libgstreamer-plugins-good1.0-dev libyaml-cpp-dev

sudo apt-get install -y ros-melodic-camera-info-manager ros-melodic-camera-calibration-parsers ros-melodic-image-transport

sudo apt-get install -y ros-melodic-gscam



Now create your catkin workspace environment

mkdir -p ~/catkin_ws/src

cd ~/catkin_ws/

catkin_make
# or
catkin build

echo "source devel/setup.bash" >> ~/.bashrc

echo $ROS_PACKAGE_PATH
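If the workspace overlay is set up correctly, the echoed path should list your workspace ahead of the system install; roughly the following (your username will differ):

# Expected output (approximately):
#   /home/<user>/catkin_ws/src:/opt/ros/melodic/share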

If you are completely new to ROS please take the time to read through these tutorials

If you want to really understand how ROS works and why it is so applicable in robotics applications, read this book (at minimum the first 4-5 chapters).

Installing ZED SDK

Download the ZED SDK https://www.stereolabs.com/developers/release/

cd ~/Downloads/
chmod +x ZED_SDK_Tegra_JP42_v3.2.1.run
./ZED_SDK_Tegra_JP42_v3.2.1.run

It's up to you which options you would like to install or not install.
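To sanity-check the install with the camera plugged in, the SDK ships diagnostic tools. A minimal check, assuming the default /usr/local/zed install path (exact tool names vary by SDK version):

# List the installed ZED tools, then run the Explorer/Diagnostic tool from here
ls /usr/local/zed/tools/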

Installing Redtail Package

cd ~/catkin_ws/src

git clone https://github.com/mtbsteve/redtail.git

#Clone modified packages
cd ~
git clone https://github.com/akrolic/Redtail_Extended_mod.git
cp -r ~/Redtail_Extended_mod/packages/ ~/catkin_ws/src/redtail/ros/packages

# Build the nvstereo_inference library, sample application, and tests
cd /usr/src/gtest
cmake CMakeLists.txt
make

cd $HOME/catkin_ws/src/redtail/stereoDNN
mkdir build
cd ./build
cmake -DCMAKE_BUILD_TYPE=Debug ..
make

cd ..
mkdir build_rel
cd ./build_rel/
cmake -DCMAKE_BUILD_TYPE=Release ..
make

#Test its working
./bin/nvstereo_tests_debug ./tests/data

mkdir $HOME/catkin_ws/src/redtail/ros/packages/stereo_dnn_ros/stereoDNN
ln -s $HOME/catkin_ws/src/redtail/stereoDNN/build $HOME/catkin_ws/src/redtail/ros/packages/stereo_dnn_ros/stereoDNN/
ln -s $HOME/catkin_ws/src/redtail/stereoDNN/lib $HOME/catkin_ws/src/redtail/ros/packages/stereo_dnn_ros/stereoDNN/
ln -s $HOME/catkin_ws/src/redtail/stereoDNN/sample_app $HOME/catkin_ws/src/redtail/ros/packages/stereo_dnn_ros/stereoDNN/

#Sample Application
cd ~/catkin_ws/src/redtail/stereoDNN
./bin/nvstereo_sample_app_debug nvsmall 513 161 ./models/NVTiny/TensorRT/trt_weights.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin

cd ~/catkin_ws/
catkin build

Configure ROS Network and TX2 Hotspot

#For reference here is a list of your environment variables
#We will be changing some of them later on
printenv | grep ROS

Navigate to this directory:

cd /etc/modprobe.d/

sudo gedit bcmdhd.conf

Add the following to a new line in the file

options bcmdhd op_mode=2
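If you prefer to do this from a terminal rather than gedit, a one-liner that appends the same option:

# Append the hotspot option to the module config (requires sudo)
echo "options bcmdhd op_mode=2" | sudo tee -a /etc/modprobe.d/bcmdhd.conf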

Although the screenshot below is from Ubuntu 16.04, setting up a new Wi-Fi connection and setting the mode to hotspot on Ubuntu 18.04 is similar.

Once you have made the changes to the bcmdhd.conf configuration file and created a new Wi-Fi connection, you can restart the TX2. After it boots up you should notice the hotspot has started. You can check whether you can connect from another local machine.
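If you would rather create the hotspot from the command line instead of the GUI, nmcli on Ubuntu 18.04 can do it as well. A sketch; the interface name, SSID, and password below are placeholder assumptions you should change:

# Create and start a hotspot (ifname/ssid/password are assumptions)
nmcli device wifi hotspot ifname wlan0 ssid tx2-hotspot password "changeme1"

# Confirm it is active
nmcli connection show --active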

Configure ROS Network Environment Variables

In this example let's assume that:

  • Drone's IP: 10.42.0.1

  • Host PC IP: 10.42.0.2

On the TX2

export | grep ROS

ifconfig

echo "export ROS_MASTER_URI=http://10.42.0.1:11311" >> ~/.bashrc
echo "export ROS_IP=10.42.0.1" >> ~/.bashrc

On the Host PC (With ROS melodic installed using the same steps provided beforehand)

ifconfig

echo "export ROS_MASTER_URI=http://10.42.0.1:11311" >> ~/.bashrc
echo "export ROS_IP=10.42.0.2" >> ~/.bashrc

Now when you are connected to the TX2's hotspot from the Host PC, you will be a part of the ROS network and can publish/subscribe or view information about topics/nodes, etc.
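A quick way to confirm the two machines can actually talk ROS to each other, run from the Host PC after connecting to the hotspot:

# The TX2 should answer, and its ROS master should list topics
ping -c 3 10.42.0.1
rostopic list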

Installing Joystick Node on Host PC

Use these commands to install the joy package, start a ROS master and then run the joy node with a controller connected to the Host PC.

sudo apt install ros-melodic-joy

roscore

rosrun joy joy_node _dev:=/dev/input/js0 & rostopic echo /joy

#Debugging
ls /dev/input/js*
#Try other js# numbers in case you have multiple joysticks connected, until you find the right one
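If the joy node prints nothing, you can also test the device outside of ROS with jstest from the standalone joystick package (install it if missing):

sudo apt install joystick

# Watch raw axis/button events from the controller
jstest /dev/input/js0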

Installing QGroundControl on the Host PC

Download the AppImage here

Following this guide

sudo usermod -a -G dialout $USER
sudo apt-get remove modemmanager -y
sudo apt install gstreamer1.0-plugins-bad gstreamer1.0-libav gstreamer1.0-gl -y

chmod +x ./QGroundControl.AppImage
./QGroundControl.AppImage

Transmitter / Receiver Configuration

We are using the FrSky Taranis QX7 transmitter with the FrSky R-XSR SBUS 2.4GHz Micro receiver, so these steps may differ for other transmitter/receiver pairings; if you are interested in other pairings, check this out. The Taranis QX7 and R-XSR manuals are linked below for reference.

Go to SETUP, create a new model, and configure the internal RF and external RF settings as shown below:

Go to the INPUTS page and add the following switch channels 05-08

This is how we laid out our switches; ultimately you can change them in QGC later anyway.
Input settings - Select a switch of your liking
Here is the switch layout for reference

Once you have the switches assigned to a channel, you can configure the switches in QGC in order to change flight modes, arm/disarm, etc. We suggest using switch SH as an emergency kill switch.

Parameterize your channels to set flight modes and have a kill switch

Launching ROS Nodes

Assuming everything is properly downloaded, you can begin launching ROS nodes. You may run into an error when launching a ROS node like "FCU: DeviceError:serial:open: Permission Denied". To remedy this, run:

sudo chmod 666 /dev/ttyTHS2
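Note that the chmod only lasts until reboot. A more persistent option, assuming /dev/ttyTHS2 is group-owned by dialout as on stock L4T, is to add your user to that group (log out and back in for it to take effect):

# Persistent alternative to chmod
sudo usermod -a -G dialout $USER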

ZED2 and ROS to RTSP

Following mtbsteve's 'Testing of the Installation' wiki page, you can test the ZED2 launcher and ROS to RTSP launcher.

# We are using the ZED2, so zed2.launch is needed
roslaunch zed_wrapper zed2.launch 

# In another terminal
roslaunch ros_rtsp rtsp_streams.launch

In order to view the RTSP stream in QGroundControl on the Host PC, navigate to General Settings > Video. Choose RTSP Video Stream and set the RTSP URL to 'rtsp://10.42.0.1:8554/<mountpoint>'

Mountpoints can be configured in ~/catkin_ws/src/ros_rtsp/config/stream_setup.yaml by following the instructions here. For initial testing, just use the /zedimage mountpoint. If any changes are made to the catkin workspace, the workspace must be rebuilt; for example

# Make sure you are in the catkin_ws/ folder
catkin build ros_rtsp

MAVROS

To test MAVROS and the connection between the flight controller and the Host PC, run the px4 controller node. QGC should connect to the drone.

roslaunch px4_controller mavros_controller.launch

Darknet-YOLO

To test the Darknet-YOLO object detection node, run the following

roslaunch zed_wrapper zed2.launch
# In another terminal
roslaunch darknet_ros darknet_ros.launch

You can view this in RVIZ or via RTSP on the Host machine by changing the mountpoint to /zedyolo. This will output a bounding box over the zed image. It's not very accurate.

stereoDNN

As stated before, the stereoDNN is not used specifically with TrailNet, but it allows the integration of depth for other parts of the project. To test the stereoDNN, run the following:

cd ~/catkin_ws/src/redtail/stereoDNN
./bin/nvstereo_tests_debug ./tests/data

Assuming the tests pass, you can do a test run of each of the different DNNs on a set of sample images. The output of each run can be found at ./bin/disp.bin.

./bin/nvstereo_sample_app_debug nvsmall 513 161 ./models/NVTiny/TensorRT/trt_weights.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin

./bin/nvstereo_sample_app_debug resnet18_2D 513 257 ./models/ResNet-18_2D/TensorRT/trt_weights_fp16.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin fp16

./bin/nvstereo_sample_app_debug resnet18_2D 513 257 ./models/ResNet-18_2D/TensorRT/trt_weights.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin fp32

To see the output from the ZED2, run the following

roslaunch stereo_dnn_ros zed_resnet18_2D_fp16.launch

The output can be seen in RVIZ by adding the stereo_dnn_ros's image topic

Image topic of stereo_dnn_ros in RVIZ

mtbsteve created a node that displays the left and right images as well as the stereoDNN's output and a color coded version of the stereoDNN's output based on the KITTI color scheme. This can be seen in RVIZ by running the following

roslaunch stereo_dnn_ros ap_zed_resnet18_2D_fp16.launch
# In another terminal
roslaunch stereo_dnn_ros_viz ap_debug_viz.launch
Image topic from stereo_dnn_ros_viz in RVIZ

Running Everything

To run everything, minus the Autonomous Controller, you can run

roslaunch caffe_ros everything.launch

Looking in this .launch file, you will see that it is running lots of different nodes: ZED2, TrailNet, YOLO, stereoDNN, MAVROS, ROS to RTSP, and redtail_debugger. If you want to run Darknet-YOLO on top of this, open another terminal and run it as shown in the previous section.
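A quick way to confirm all of those nodes actually came up after launching:

# Should list the ZED, caffe_ros, MAVROS, RTSP, and debugger nodes, among others
rosnode list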

Arrow Overlay modification

For a more visual representation of what TrailNet is trying to do, an arrow overlay can be added to any of the streamed topics for RTSP so it can be seen in QGC. All changes are in the folder ~/catkin_ws/src/redtail/ros/packages/redtail_debug/. In CMakeLists.txt, OpenCV and cv_bridge dependencies were added. In package.xml, the cv_bridge dependency was added. Most of the modification is in the /src/redtail_debug_node.cpp file and is commented to help with both identification and understanding.

In order to do this, we piggybacked off of the redtail_debugger. In short, we subscribe to the /zed2/zed_node/left/image_rect_color topic. Once subscribed, we can use cv_bridge's toCvCopy() in the cameraCallback() function to convert the ROS message into an OpenCV image.

// Taken from the cv_bridge tutorial
static cv_bridge::CvImagePtr cv_img;

void cameraCallback(const sensor_msgs::ImageConstPtr& msg) {

    cv_bridge::CvImagePtr cv_ptr;
    try {
      cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
    } catch (cv_bridge::Exception& e) {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
    }
    cv_img = cv_ptr;
}

This allows us to use any of the OpenCV functions (in this case arrowedLine()) to modify the image.

// Draw arrow on image given angle from trails_dnn
int line_thickness = 3;
int line_length = 100;
float shift_multiplier = 50.0; // shift_by is a -1.0 to 1.0 float

// Cast shift_by to int so the arrow can be shifted. Change shift_multiplier to
// change the range, i.e., 50x means a -50 to 50 pixel shift
int arrow_shift = static_cast<int>(shift_by * shift_multiplier);

// First point of the arrow: halfway across the screen (+/- shift amount), 3/4 of the way down
cv::Point point1 = cv::Point(cv_img->image.cols/2 - arrow_shift, 3 * cv_img->image.rows/4);
// End point of the arrow depends on point1 and trig based on the previously determined angle
cv::Point point2 = cv::Point(point1.x + (line_length*std::sin(angle)), point1.y - (line_length*std::cos(angle)));

// Target point1
cv::Point point3 = cv::Point(cv_img->image.cols/2, 3 * cv_img->image.rows/4);
// Target point2
cv::Point point4 = cv::Point(cv_img->image.cols/2, point1.y - line_length);

// Target line
cv::arrowedLine(cv_img->image, point3, point4, cv::Scalar(225, 225, 0), line_thickness);

// Current line
cv::arrowedLine(cv_img->image, point1, point2, cv::Scalar(0, 0, 255), line_thickness);

// Publish modified image
image_output_pub.publish(cv_img->toImageMsg());

Then we can convert the OpenCV image back to a ROS message with toImageMsg() and publish the topic.

In order to change the topic that the arrow overlay subscribes to, you can change the first argument of the subscription found on line 69. This will require the catkin_ws/ to be rebuilt each time it is changed (a rebuild one-liner is sketched after the code below). Another, probably better, option would be to add another nh.param like on line 53, which could then be changed in the launch file that starts the redtail_debug node ('caffe_ros everything.launch' for example).

// This is for the TrailNet overlay for the RTSP stream
// The subscriber topic can be changed if the Darknet-YOLO topic is desired
// If you decide to change the published topic name, make sure to update the
// RTSP .yaml file to match, found at ~/catkin_ws/src/ros_rtsp/config/stream_setup.yaml
img_sub = nh.subscribe<sensor_msgs::Image>("/zed2/zed_node/left/image_rect_color", 1, cameraCallback);
image_output_pub = nh.advertise<sensor_msgs::Image>("network/image_with_arrow", queue_size);
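For reference, after editing the hard-coded topic, the rebuild only needs to cover this one package (assuming the package name matches its folder, redtail_debug):

cd ~/catkin_ws
catkin build redtail_debug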

This arrow overlay is still a work in progress. The algorithm for both the angle and offset may need to be tweaked.

Autonomous Flight

Once you have everything running (i.e. roslaunch caffe_ros everything.launch) and you want to start autonomous flight, you should run the following

roslaunch px4_controller robot_controller.launch

THIS SHOULD BE TESTED WITHOUT PROPELLERS FIRST

This will take over the drone and put the drone into offboard mode (QGC should tell you this). The drone will then take off and hover, awaiting input to begin autonomous travel. This process is more thoroughly described in the Field Testing page.
