# Software Setup

## Overview

{% hint style="info" %}
Important notes before moving further into testing additional packages
{% endhint %}

It is not necessary to run the Stereo DNN in order to test the functionality of the TrailNet DNN for path navigation. Below is the rqt\_graph that shows the flow of topics published and subscribed between nodes. It is worth validating that the drone can carry out the navigation package tasks on its own, because the TX2 is less likely to overheat or crash than when the TrailNet DNN and the Stereo DNN run at the same time. From our experience, modifications to the frame are necessary in order to install the heatsink and fan (or fans) needed to keep the TX2 below 60 degC. If the board goes past 60 degC it will automatically reduce its clock speed to prevent damage, artificially reducing the true capabilities of the Jetson. We eventually want to combine several packages and perform even more complex navigation techniques, so we suggest making the frame modifications described in the "Hardware Setup" section if you want to test beyond this point.

![Fully operational RQT\_GRAPH for TrailNet Navigation with YOLO for Object Detection](https://lh3.googleusercontent.com/jDExDujfXxSIaW4IZ1DBozM03c79gbT_1-zYXC_dIXHRliU7srXZOTRddp0HdP4D2N6rp4wjUotfMdNjAvc9lylRwhJbw4ebZyLeFKCAIq5HC6eP1e5ehRgA_R-YKtCRuDtiibwSVdE)
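The throttling behaviour described above can be monitored from the command line. The sketch below is a hedged example: the sysfs path in the comment is the usual one on Jetson boards but should be verified on your unit, and the helper itself just compares a millidegree reading against the 60 degC limit.

```shell
#!/bin/sh
# Hedged sketch: warn when a thermal reading is at or above the 60 degC
# throttle point. Readings are in millidegrees C, as exposed under
# /sys/devices/virtual/thermal/thermal_zone*/temp on the TX2.
check_temp() {
    if [ "$1" -ge 60000 ]; then
        echo "WARN: $1 mC - TX2 may throttle"
    else
        echo "OK: $1 mC"
    fi
}

# On a real TX2 you might loop over the sysfs thermal zones:
#   for z in /sys/devices/virtual/thermal/thermal_zone*/temp; do
#       check_temp "$(cat "$z")"
#   done
check_temp 45000
check_temp 61500
```

Running this on the bench while both DNNs are active is a quick way to confirm whether your cooling setup is adequate.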

## Jetpack 4.2.2 installation instructions

Although these steps are for Jetpack 4.2.2, the same process can be followed for Jetpack 4.3; just make sure you flash the correct carrier board firmware for that version of Jetpack.

Download the [Nvidia SDK Manager](https://developer.nvidia.com/nvidia-sdk-manager)

Download the [J120 Firmware 2.2](https://auvidea.eu/firmware/) for the TX2

or

Download the [Orbitty Firmware](http://connecttech.com/product/orbitty-carrier-for-nvidia-jetson-tx2-tx1/)

Start the SDK Manager

```bash
sdkmanager
```

Select Jetpack 4.2.2 and make sure you have the host machine unchecked.&#x20;

![In this case it will be Jetpack 4.2.2](https://lh6.googleusercontent.com/ib7yndWI0S9YGbAZ3hpptNI6jocUCtLJEpsWTcCMLq4TGSA-Nt3C-UdMZec-AA0EoxBst0EN9dm3EPWo6zXTJeGAFhuL6Funy3GIY-KUdgRRt7rbNaMj8EtMq2ytWBMoamG1XJhzfBo)

First we need to populate the \~/nvidia/nvidia\_sdk folder.&#x20;

You need to **only check the Jetson OS components** for download and installation.

* Note you **cannot** choose the "Download now. Install later" option because we need the \~/nvidia/nvidia\_sdk files to be populated so we can migrate our firmware into these files for the J120.&#x20;
* We will install the SDK components later on.&#x20;
* Once the OS image is downloaded and the \~/nvidia/nvidia\_sdk folder is populated, we can skip the flash and navigate to the carrier board firmware directory.

Check the README file in the firmware folder, and verify that the correct paths and files are copied over into the \~/nvidia/nvidia\_sdk folder.

For example, for the J120 it would be something like this:

```bash
cp -r ~/Downloads/J120_4.2/J120_kernel/* /home/<user>/nvidia/nvidia_sdk/JetPack_4.2_Linux_JETSON_TX2/Linux_for_Tegra/
cd /home/<user>/nvidia/nvidia_sdk/JetPack_4.2_Linux_JETSON_TX2/Linux_for_Tegra/
sudo ./apply_binaries.sh
```

After running ./apply\_binaries.sh you should not see any errors; if it completes cleanly you are good to move to the next step.
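Since a clean run of apply\_binaries.sh gates the next step, a small pre-flash sanity check can save a failed flash. The helper below is a sketch demonstrated against a scratch directory; the real file names to check come from the firmware README.

```shell
#!/bin/sh
# Hedged sketch: before flashing, confirm the firmware copy actually landed
# in Linux_for_Tegra. The file list is illustrative; take the real names
# from the firmware README.
verify_copied() {
    # $1: destination dir; remaining args: files that must exist there
    dir="$1"; shift
    for f in "$@"; do
        [ -e "$dir/$f" ] || { echo "missing: $f"; return 1; }
    done
    echo "all files present"
}

# Demo against a scratch directory standing in for Linux_for_Tegra:
mkdir -p /tmp/demo_l4t && touch /tmp/demo_l4t/apply_binaries.sh
verify_copied /tmp/demo_l4t apply_binaries.sh
```

On the real system you would point it at the Linux\_for\_Tegra directory and list the files named in the firmware README.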

### Re-Flashing with the J120 firmware patch installed

* Now you need to turn off the TX2 on the J120 and boot back into recovery mode.
* Connect a micro-USB cable to the J120 and to your computer (preferably the one that came with the TX2 that has the little green controller symbol on it).
* Connect power to the J120; the green LED should not come on, indicating the OS has not booted.
* Then boot into recovery mode:
  1. Hold the REC (Recovery) button, then press the power button (while still holding REC).
  2. Press the reset button once while still holding REC, wait 2 seconds, then release REC.
* To verify the device is connected to the HOST PC, run `lsusb`; if NvidiaCorp shows up as a listed USB device you are set.
* Another way to verify the TX2 has booted in recovery mode is to hook up the HDMI to a display; nothing should appear on the display if it is properly booted into recovery mode.
* Then follow the sdkmanager steps as we did before. ONLY install the Jetson OS components for download and installation. NOT the sdk components.&#x20;
* It should indicate that the OS image is ready with a green check mark because we already populated and flashed before, but now we modified that image and it will re-flash.
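The `lsusb` check above can be scripted. The snippet below is a sketch of the matching logic only, run against canned lsusb-style text; the exact device string on your system may differ.

```shell
#!/bin/sh
# Hedged sketch: detect the Nvidia recovery-mode device in lsusb output.
# Demonstrated on canned text so the logic is clear without hardware.
in_recovery() {
    # $1: output of `lsusb` (here passed in as a string)
    echo "$1" | grep -qi "nvidia corp" \
        && echo "recovery device present" \
        || echo "not detected"
}

in_recovery "Bus 001 Device 004: ID 0955:7c18 NVidia Corp."
in_recovery "Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub"
```

On the real host you would call it as `in_recovery "$(lsusb)"`.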

### OS Boot up and Set-up

* Once the sdkmanager has completed the flash you should see the TX2 boot up if you are connected to an HDMI.&#x20;
* Verify that the J120 patch worked by checking that both the top and bottom USB ports work.
* Note that only the upper USB port is USB 3.0; the bottom is USB 2.0.
* Also **note that the micro-USB port does not work** in normal operation, so it will no longer appear via `lsusb`; it only appears in forced recovery mode.
* While you are here, configure your username/password; once those basics are finished in the setup wizard you can move to the next step.

### Installing the SDK components from sdkmanager

* This next step was traditionally done over an Ethernet connection, but newer versions of the SDK Manager use the USB to micro-USB connection as a USB-Ethernet link.
* If you remember, **our carrier board disabled the micro-USB port**, and in order to install the SDK components we **cannot** be in forced recovery mode. That mode is only used for flashing!
* So here is where it gets silly.
* Power down the TX2 and remove it from the J120 carrier board.
* Get out the old development board, install the TX2 on it, and make sure you bring your antennas with you because we need to connect to the internet.
* Once you have everything moved over, boot up and log in.
* Open a terminal and run

```bash
sudo apt-get update && sudo apt-get upgrade
```

* Now connect the micro-USB to your HOST PC and run the sdkmanager.
* Run `ifconfig` on the TX2 and on your host machine to ensure the IPs are 192.168.55.1 and 192.168.55.100 respectively.
* Run `lsusb` and check whether the NvidiaCorp device shows up; **it should**.
* **ONLY check the SDK components for download and installation**.
* After the download is complete, a window will appear asking for your username and password with a default IP address that corresponds to the USB-Ethernet connection to the TX2.
* Continue the installation; since we already updated the packages earlier we hopefully won't have any failed packages. (Maintain the internet connection during the install just to be safe.)
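If the USB-Ethernet link is misconfigured, the SDK component install will fail, so the address check is worth scripting. This sketch only demonstrates the matching logic against sample `ifconfig` output; adapt it to your interface names.

```shell
#!/bin/sh
# Hedged sketch: check whether ifconfig-style text contains an expected
# inet address. Uses word-matching so 192.168.55.1 does not falsely
# match inside 192.168.55.100.
has_ip() {
    # $1: ifconfig output (as a string); $2: expected address
    echo "$1" | grep -qw "inet $2" \
        && echo "found $2" \
        || echo "missing $2"
}

sample='inet 192.168.55.100  netmask 255.255.255.0  broadcast 192.168.55.255'
has_ip "$sample" 192.168.55.100
has_ip "$sample" 192.168.55.1
```

On the real machines you would call it as `has_ip "$(ifconfig)" 192.168.55.1` on the TX2 and with `.100` on the host.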

### Complete

* Run `nvcc --version` and see if CUDA is there!
* You can verify any other packages as well, but that's all. Easy! ;D

### Install Jetsonstats

Jetsonstats (jetson-stats) is a great package that allows you to quickly check all of the critical status and health information on the TX2.

```bash
sudo apt install python-pip

sudo -H pip install -U jetson-stats

#Reboot/Logout & Login

sudo jtop
```

Here, inside `jtop`, you can change the power mode to MAXN and start the jetson\_clocks service.

Alternatively you can run

```bash
sudo nvpmodel -m 0
jetson_clocks
```

## Setting up ROS Melodic

```bash
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

sudo apt update

sudo apt install ros-melodic-desktop-full

echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc

source ~/.bashrc

sudo apt install python-rosdep python-rosinstall python-rosinstall-generator python-wstool build-essential

sudo rosdep init

rosdep update

#Project Dependencies
sudo apt-get install -y  ros-melodic-mavros ros-melodic-mavros-extras ros-melodic-joy python-catkin-tools tmux ros-melodic-tf2-geometry-msgs

sudo apt-get install -y gstreamer1.0-plugins-bad

sudo apt-get install -y libgstreamer1.0-dev gstreamer1.0-tools libgstreamer-plugins-base1.0-dev libgstreamer-plugins-good1.0-dev libyaml-cpp-dev

sudo apt-get install -y ros-melodic-camera-info-manager ros-melodic-camera-calibration-parsers ros-melodic-image-transport

sudo apt-get install -y ros-melodic-gscam
```

Now create your catkin workspace environment

```bash
mkdir -p ~/catkin_ws/src

cd ~/catkin_ws/

catkin_make
# or
catkin build

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc

echo $ROS_PACKAGE_PATH

```

If you are completely new to ROS please take the time to read through these tutorials

{% embed url="<http://wiki.ros.org/ROS/Tutorials>" %}

If you want to really understand how ROS works and why it is so applicable in robotics applications, read [this book](https://www.pishrobot.com/wp-content/uploads/2018/02/ROS-robot-programming-book-by-turtlebo3-developers-EN.pdf) (at minimum the first 4-5 chapters).

### Installing ZED SDK

Download the ZED SDK <https://www.stereolabs.com/developers/release/>

```bash
cd ~/Downloads/
chmod +x ZED_SDK_Tegra_JP42_v3.2.1.run
./ZED_SDK_Tegra_JP42_v3.2.1.run
```

It's up to you which options you would like to install.

### Installing Redtail Package

```bash
cd ~/catkin_ws/src

git clone https://github.com/mtbsteve/redtail.git

#Clone modified packages
cd ~
git clone https://github.com/akrolic/Redtail_Extended_mod.git
cp -r ~/Redtail_Extended_mod/packages/ ~/catkin_ws/src/redtail/ros/packages

# Build the nvstereo_inference library, sample application and tests
cd /usr/src/gtest
cmake CMakeLists.txt
make

cd $HOME/catkin_ws/src/redtail/stereoDNN
mkdir build
cd ./build
cmake -DCMAKE_BUILD_TYPE=Debug ..
make

cd ..
mkdir build_rel
cd ./build_rel/
cmake -DCMAKE_BUILD_TYPE=Release ..
make

#Test its working
./bin/nvstereo_tests_debug ./tests/data

mkdir ~/catkin_ws/src/redtail/ros/packages/stereo_dnn_ros/stereoDNN
ln -s ~/catkin_ws/src/redtail/stereoDNN/build ~/catkin_ws/src/redtail/ros/packages/stereo_dnn_ros/stereoDNN/
ln -s ~/catkin_ws/src/redtail/stereoDNN/lib ~/catkin_ws/src/redtail/ros/packages/stereo_dnn_ros/stereoDNN/
ln -s ~/catkin_ws/src/redtail/stereoDNN/sample_app ~/catkin_ws/src/redtail/ros/packages/stereo_dnn_ros/stereoDNN/

#Sample Application
cd ~/catkin_ws/src/redtail/stereoDNN
./bin/nvstereo_sample_app_debug nvsmall 513 161 ./models/NVTiny/TensorRT/trt_weights.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin

cd ~/catkin_ws/
catkin build
```

## Configure ROS Network and TX2 Hotspot

```bash
#For reference here is a list of your environment variables
#We will be changing some of them later on
printenv | grep ROS
```

Navigate to the modprobe configuration directory and open the wifi driver configuration file:

```bash
cd /etc/modprobe.d/

sudo gedit bcmdhd.conf
```

Add the following to a new line in the file

```bash
options bcmdhd op_mode=2
```

{% hint style="warning" %}
Note: this line forces the wifi chip to broadcast a hotspot only. If you want to connect to the internet over wifi, you will need to comment this line out.
{% endhint %}
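Since you will be flipping this line back and forth whenever you need internet access, a pair of helper functions can toggle it safely. This is a sketch that assumes GNU sed and is demonstrated on a scratch copy of the file; on the real system you would point it (with sudo) at /etc/modprobe.d/bcmdhd.conf.

```shell
#!/bin/sh
# Hedged sketch: comment/uncomment the op_mode line in a bcmdhd.conf-style
# file passed as $1. Demonstrated on a scratch copy, not the live file.
disable_hotspot() { sed -i 's/^options bcmdhd op_mode=2/#options bcmdhd op_mode=2/' "$1"; }
enable_hotspot()  { sed -i 's/^#options bcmdhd op_mode=2/options bcmdhd op_mode=2/' "$1"; }

# Demo on a scratch copy:
printf 'options bcmdhd op_mode=2\n' > /tmp/bcmdhd_demo.conf
disable_hotspot /tmp/bcmdhd_demo.conf
cat /tmp/bcmdhd_demo.conf
enable_hotspot /tmp/bcmdhd_demo.conf
cat /tmp/bcmdhd_demo.conf
```

Remember that the change only takes effect after a reboot (or reloading the driver module).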

Although the screenshot below is from Ubuntu 16.04, setting up a new wifi connection and setting the mode to hotspot on Ubuntu 18.04 is similar.

![](https://2257299444-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MDFWl6FJNNzxuDA_xAi%2F-MDSAHjnM_3slbPkuMxN%2F-MDSEMc5rOi7ulvzvxJV%2Fimage.png?alt=media\&token=06856538-6dd0-43e5-bc8b-b991dc45c376)

Once you have made the changes to the bcmdhd.conf configuration file and created a new wifi connection you can restart the TX2. After it boots up you should notice the hotspot has started. You can check to see if you can connect from another local machine.

### Configure ROS Network Environment Variables

In this example let's assume:

* Drone's IP: 10.42.0.1
* Host PC IP: 10.42.0.2

On the TX2

```bash
export | grep ROS

ifconfig

echo "export ROS_MASTER_URI=http://10.42.0.1:11311" >> ~/.bashrc
echo "export ROS_IP=10.42.0.1" >> ~/.bashrc
```

On the Host PC (With ROS melodic installed using the same steps provided beforehand)

```bash
ifconfig

echo "export ROS_MASTER_URI=http://10.42.0.1:11311" >> ~/.bashrc
echo "export ROS_IP=10.42.0.2" >> ~/.bashrc
```

Now when you are connected to the TX2's hotspot from the Host PC, you will be part of the ROS network and can publish/subscribe or view information about topics, nodes, etc.
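Because the export lines above are appended with `>>`, re-running them stacks duplicates in \~/.bashrc. A small convenience helper (our own addition, not part of ROS) appends a line only if it is not already present; the demo writes to a scratch file rather than your real \~/.bashrc.

```shell
#!/bin/sh
# Hedged sketch: idempotent append. $1: file, $2: exact line to ensure.
add_once() {
    grep -qxF "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

# Demo on a scratch file standing in for ~/.bashrc:
rc=/tmp/demo_bashrc
: > "$rc"
add_once "$rc" "export ROS_MASTER_URI=http://10.42.0.1:11311"
add_once "$rc" "export ROS_MASTER_URI=http://10.42.0.1:11311"  # no-op the second time
add_once "$rc" "export ROS_IP=10.42.0.1"
cat "$rc"
```

Using `add_once ~/.bashrc "export ROS_IP=10.42.0.1"` on both machines keeps the file clean no matter how many times you re-run the setup.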

## Installing Joystick Node on Host PC

Use these commands to install the joy package, start a ROS master and then run the joy node with a controller connected to the Host PC.&#x20;

```bash
sudo apt install ros-melodic-joy

roscore

rosrun joy joy_node _dev:=/dev/input/js0 & rostopic echo /joy

#Debugging
ls /dev/input/js*
#Try other js numbers in case you have multiple joysticks connected, until you find the right one
```
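The js# guessing can be scripted. The helper below just enumerates joystick-style device nodes under a directory you pass in; it is demonstrated against a fake directory, and on a real system you would point it at /dev/input.

```shell
#!/bin/sh
# Hedged sketch: list joystick device nodes (js*) under a directory.
list_joysticks() {
    for j in "$1"/js*; do
        [ -e "$j" ] && echo "$j"
    done
}

# Demo with fake device nodes; on a real system: list_joysticks /dev/input
mkdir -p /tmp/fake_input
touch /tmp/fake_input/js0 /tmp/fake_input/js1
list_joysticks /tmp/fake_input
```

Each printed path is a candidate for the `_dev:=` argument to joy\_node.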

## Installing QGroundControl on the Host PC

Download the [AppImage here](https://s3-us-west-2.amazonaws.com/qgroundcontrol/latest/QGroundControl.AppImage)&#x20;

Then follow [this guide](https://docs.qgroundcontrol.com/en/getting_started/download_and_install.html):

```bash
sudo usermod -a -G dialout $USER
sudo apt-get remove modemmanager -y
sudo apt install gstreamer1.0-plugins-bad gstreamer1.0-libav gstreamer1.0-gl -y

chmod +x ./QGroundControl.AppImage
./QGroundControl.AppImage
```

## Transmitter / Receiver Configuration&#x20;

We are using the FrSky Taranis QX7 transmitter with the FrSky R-XSR SBUS 2.4GHz Micro receiver, so these steps may differ for other transmitter/receiver pairings; if you are interested in [other pairings check this out](https://docs.px4.io/v1.9.0/en/getting_started/rc_transmitter_receiver.html). The Taranis QX7 and R-XSR manuals are linked below for reference.

{% embed url="<https://opentx.gitbooks.io/opentx-taranis-manual/content/model_setup.html>" %}

{% embed url="<https://www.frsky-rc.com/r-xsr/>" %}

Go to SETUP, create a new model, and configure the internal RF and external RF settings as shown below:

![](https://2257299444-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MDFWl6FJNNzxuDA_xAi%2F-MDVA43QsWPzKxxncHs3%2F-MDVQDown0zRern7Kq3_%2Fimage.png?alt=media\&token=f6613708-ed28-4127-a56e-7ce4242c3596)

Go to the INPUTS page and add the following switch channels 05-08

![This is how we laid out our switches; ultimately you can change them in QGC later anyway](https://2257299444-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MDFWl6FJNNzxuDA_xAi%2F-MDWV_4dQDJGn_nhS3uX%2F-MDW_2GFiNl0nSpK8zZt%2FIMG_0719.jpg?alt=media\&token=f79f384c-f17a-40f3-8717-9bd9c2d7a3f7)

![Input settings - Select switch of your liking](https://2257299444-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MDFWl6FJNNzxuDA_xAi%2F-MDWV_4dQDJGn_nhS3uX%2F-MDW_v8XdOt4eaW1MTh6%2FIMG_0720.jpg?alt=media\&token=62bd98d0-290e-48f0-8d28-2aa34d3acaf6)

![Here is the switch layout for reference](https://2257299444-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MDFWl6FJNNzxuDA_xAi%2F-MDWV_4dQDJGn_nhS3uX%2F-MDW_s38ZRzGYINobqfc%2Fimage.png?alt=media\&token=dd6f69b4-5ff4-410d-8968-2bd2962b56a2)

Once you have the switches assigned to channels, you can configure them in QGC in order to change flight modes, arm/disarm, etc. We suggest using switch SH as an emergency kill switch.

![Parameterize your channels to set flight modes and have a kill switch](https://2257299444-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MDFWl6FJNNzxuDA_xAi%2F-MDWV_4dQDJGn_nhS3uX%2F-MDWaoNIACzKqwjy6tMb%2Fimage.png?alt=media\&token=b0e6a3d3-345f-446d-9488-0762664020cd)

{% embed url="<https://docs.qgroundcontrol.com/en/SetupView/FlightModes.html>" %}

## Launching ROS Nodes

Assuming everything is properly downloaded, you can begin launching ROS nodes. You may run into an error when launching a ROS node, such as "FCU: DeviceError:serial:open: Permission Denied". To remedy this, run

```bash
sudo chmod 666 /dev/ttyTHS2
```
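Note that this chmod does not survive a reboot. A udev rule can make the permission persistent; the file name below is our own choice and the rule should be verified on your system before relying on it:

```
# /etc/udev/rules.d/55-tegraserial.rules  (hypothetical file name)
KERNEL=="ttyTHS2", MODE="0666"
```

Reload the rules with `sudo udevadm control --reload-rules` (or simply reboot).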

### ZED2 and ROS to RTSP

Following mtbsteve's ['Testing of the Installation' wiki page](https://github.com/mtbsteve/redtail/wiki/Testing-of-the-Installation), you can test the ZED2 launcher and ROS to RTSP launcher.

```bash
# We are using the ZED2, so zed2.launch is needed
roslaunch zed_wrapper zed2.launch 

# In another terminal
roslaunch ros_rtsp rtsp_streams.launch
```

In order to view the RTSP stream in QGroundControl on the Host PC, navigate to General Settings > Video. Choose RTSP Video Stream and set the RTSP URL to 'rtsp\://10.42.0.1:8554/\<mountpoint>'

![](https://lh4.googleusercontent.com/mAK56d7cdEfj5MMe6Rjt8Lnfq27RjJB0rs-3DFnoAWL0Xh98ZnBTwKLXXXpwY4I0fg6HRdkHhYUNG78ICBIKZ_DApsQaKkLOONc0zuI9kyFSzzCDa66b5qfo1HKeXMJd9nKg9DuB4G4)

Mountpoints can be configured in \~/catkin\_ws/src/ros\_rtsp/config/stream\_setup.yaml by following the [instructions here](https://github.com/mtbsteve/redtail/wiki/Setup-of-the-TX2,-ZED-and-Host-PC#stream-video-from-the-ros-image-nodes). For initial testing, just use the /zedimage mountpoint. If any changes are made to the catkin workspace, the workspace must be rebuilt; for example:

```bash
# Make sure you are in the catkin_ws/ folder
catkin build ros_rtsp
```
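For reference, a single stream entry in stream\_setup.yaml looks roughly like the sketch below. The topic, mountpoint, and caps values here are assumptions based on the ZED topics used elsewhere on this page; check the ros\_rtsp README for the exact schema.

```yaml
# Hedged sketch of one ros_rtsp stream entry; values are assumptions.
streams:
  zed-left:
    type: topic                                   # stream a ROS image topic
    source: /zed2/zed_node/left/image_rect_color  # topic to stream
    mountpoint: /zedimage                         # URL path: rtsp://<ip>:8554/zedimage
    caps: video/x-raw,framerate=15/1,width=640,height=360
    bitrate: 500
```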

### MAVROS

To test MAVROS and the connection between the flight controller and the Host PC, run the px4 controller node. QGC should connect to the drone.

```bash
roslaunch px4_controller mavros_controller.launch
```

### Darknet-YOLO

To test the Darknet-YOLO object detection node, run the following

```bash
roslaunch zed_wrapper zed2.launch
# In another terminal
roslaunch darknet_ros darknet_ros.launch
```

You can view this in RVIZ or via RTSP on the Host machine by changing the mountpoint to /zedyolo. This overlays bounding boxes on the ZED image; in our experience it is not very accurate.

### stereoDNN

As stated before, stereoDNN is not used specifically with Trailnet, but allows the integration of depth for other parts of the project. To test the stereoDNN, run the following

```bash
cd ~/catkin_ws/src/redtail/stereoDNN
./bin/nvstereo_tests_debug ./tests/data
```

Assuming the tests pass, you can do a test run of each of the different DNNs on a set of sample images. The output of each run can be found at ./bin/disp.bin

```bash
./bin/nvstereo_sample_app_debug nvsmall 513 161 ./models/NVTiny/TensorRT/trt_weights.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin

./bin/nvstereo_sample_app_debug resnet18_2D 513 257 ./models/ResNet-18_2D/TensorRT/trt_weights_fp16.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin fp16

./bin/nvstereo_sample_app_debug resnet18_2D 513 257 ./models/ResNet-18_2D/TensorRT/trt_weights.bin ./sample_app/data/img_left.png ./sample_app/data/img_right.png ./bin/disp.bin fp32
```

To see the output from the ZED2, run the following

```bash
roslaunch stereo_dnn_ros zed_resnet18_2D_fp16.launch
```

The output can be seen in RVIZ by adding the stereo\_dnn\_ros's image topic

![Image topic of stereo\_dnn\_ros in RVIZ](https://2257299444-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MDFWl6FJNNzxuDA_xAi%2F-MDW5obQiNLP8eDf2D3y%2F-MDW8m7zDg_HkxpbguZt%2Fimage.png?alt=media\&token=76dd1455-3054-459d-8ed9-6df9c6218b26)

mtbsteve created a node that displays the left and right images as well as the stereoDNN's output and a color coded version of the stereoDNN's output based on the KITTI color scheme. This can be seen in RVIZ by running the following&#x20;

```bash
roslaunch stereo_dnn_ros ap_zed_resnet18_2D_fp16.launch
# In another terminal
roslaunch stereo_dnn_ros_viz ap_debug_viz.launch
```

![Image topic from stereo\_dnn\_ros\_viz in RVIZ](https://2257299444-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-MDFWl6FJNNzxuDA_xAi%2F-MDW5obQiNLP8eDf2D3y%2F-MDW9mmcj1qB5XEElJ4j%2Fimage.png?alt=media\&token=518744c5-cb40-47ba-847e-eb21dd3c74d7)

### Running Everything

To run everything, minus the Autonomous Controller, you can run

```bash
roslaunch caffe_ros everything.launch
```

Looking in this .launch file, you will see that it is running lots of different nodes: ZED2, Trailnet, YOLO, stereoDNN, MAVROS, ROS to RTSP, and redtail\_debugger. If you want to run Darknet-YOLO on top of this, open another terminal and run it as shown in the previous section.&#x20;

### Arrow Overlay modification

For a more visual representation of what Trailnet is trying to do, an arrow overlay can be added to any of the topics streamed over RTSP, so it can be seen in QGC. All changes are in the folder \~/catkin\_ws/packages/redtail\_debug/. In [CMakeLists.txt](https://github.com/akrolic/Redtail_Extended_mod/blob/master/packages/redtail_debug/CMakeLists.txt), *OpenCV* and *cv-bridge* dependencies were added. In [package.xml](https://github.com/akrolic/Redtail_Extended_mod/blob/master/packages/redtail_debug/package.xml), the *cv\_bridge* dependency was added. Most of the modification was in the [/src/redtail\_debug\_node.cpp](https://github.com/akrolic/Redtail_Extended_mod/blob/master/packages/redtail_debug/src/redtail_debug_node.cpp) file and is commented to help with both identification and understanding.

In order to do this, we piggybacked off of the redtail\_debugger. Put simply, we subscribed to the **/zed2/zed\_node/left/image\_rect\_color** topic. Once subscribed, we can use *cv\_bridge*'s toCvCopy() in the cameraCallback() function to convert a ROS image message to an *OpenCV* image.

```cpp
// Taken from the cv_bridge tutorial
static cv_bridge::CvImagePtr cv_img;

void cameraCallback(const sensor_msgs::ImageConstPtr& msg) {

    cv_bridge::CvImagePtr cv_ptr;
    try {
      cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
    } catch (cv_bridge::Exception& e) {
      ROS_ERROR("cv_bridge exception: %s", e.what());
      return;
    }
    cv_img = cv_ptr;
}
```

This allows us to use any of the *OpenCV*  functions (in this case arrowedLine()) to modify the image.&#x20;

```cpp
// Draw arrow on image given angle from trails_dnn
int line_thickness = 3;
int line_length = 100;
float shift_multiplier = 50.0; // shift_by is a -1.0 to 1.0 float

// Cast shift_by to int so the arrow can be shifted. Change shift_multiplier to
// change the range, ie, 50x means -50 to 50 pixel shift
int arrow_shift = static_cast<int>(shift_by * shift_multiplier);

// Create first point of the arrow halfway across screen (+/- shift amount), 3/4 down
cv::Point point1 = cv::Point(cv_img->image.cols/2 - arrow_shift, 3 * cv_img->image.rows/4);
// End point of arrow depends on point1 and trig based on the angle previously determined
cv::Point point2 = cv::Point(point1.x + (line_length*std::sin(angle)), point1.y - (line_length*std::cos(angle)));

// Target point1
cv::Point point3 = cv::Point(cv_img->image.cols/2, 3 * cv_img->image.rows/4);
// Target point2
cv::Point point4 = cv::Point(cv_img->image.cols/2, point1.y - line_length);

// Target line
cv::arrowedLine(cv_img->image, point3, point4, cv::Scalar(225, 225, 0), line_thickness);

// Current line
cv::arrowedLine(cv_img->image, point1, point2, cv::Scalar(0, 0, 255), line_thickness);

// Publish modified image
image_output_pub.publish(cv_img->toImageMsg());
```

Then we can convert the OpenCV image back to a ROS message with toImageMsg() and publish the topic.&#x20;

In order to change the topic the arrow overlay subscribes to, change the first argument of the subscription found on line 69 of redtail\_debug\_node.cpp. This requires the catkin\_ws/ to be rebuilt each time it is changed. Another (probably better) option would be to add another nh.param like the one on line 53, which could then be changed in the launch file that starts the redtail\_debug node ('caffe\_ros everything.launch', for example).

```cpp
// This is for the TrailNet overlay for the RTSP stream
// The subscriber topic can be changed if the Darknet-YOLO topic is desired
// If you decide to change the published topic name, make sure to update the
// RTSP .yaml file to match, found at ~/ws/src/ros_rtsp/config/stream_setup.yaml
img_sub = nh.subscribe<sensor_msgs::Image>("/zed2/zed_node/left/image_rect_color", 1, cameraCallback);
image_output_pub = nh.advertise<sensor_msgs::Image>("network/image_with_arrow", queue_size);
```

This arrow overlay is still a work in progress. The algorithm for both the angle and offset may need to be tweaked.&#x20;

### Autonomous Flight

Once you have everything running (i.e. roslaunch caffe\_ros everything.launch) and you want to start autonomous flight, you should run the following

```bash
roslaunch px4_controller robot_controller.launch
```

**THIS SHOULD BE TESTED WITHOUT PROPELLERS FIRST**

This will take over the drone and put the drone into offboard mode (QGC should tell you this). The drone will then take off and hover, awaiting input to begin autonomous travel. This process is more thoroughly described in the Field Testing page.
