
DRONE PROGRAMMING MANUAL MODULE

Constructed & Arranged By:

Clay Hinskie

Jeremiah Edbert Grifith Sihite

Keshia Poedjiono

Stanley Nathanael Wijaya

UPH
2023
Tangerang
ABSTRACT

A drone, also regarded as an unmanned aerial vehicle (UAV), is an aircraft with no
human pilot on board, hence the name ‘unmanned’. Drones are controlled through a
ground-based controller running a computer program, and that program can be produced by
various different methods. In this module, the method used to control the drone is an
assembled program that embraces the role of Python as a route to articulate the user’s
vision to the drone. As a programming language, Python is a simple yet efficient method
that supports the applied purpose of the module. The module is composed of clear
instructions and guided tutorials that usher the reader/user toward a common goal. The
devised module has also gone through trial and error to ensure the user receives the best
results in achieving the purpose of drone programming.

Keywords: Drone, Programming, Python


References: 3 Articles + 2 Journals

ABSTRAK

A drone, also called an unmanned aerial vehicle (UAV), is an aircraft that has no
human pilot on board, which is why it is called ‘unmanned’. Drones are controlled through a
ground-based controller built with a computer program. In this case, the program can be
created by various methods. In this module, the method used to control the drone is an
assembled program that uses Python as the way to articulate the user’s vision to the drone.
As a programming language, Python is a simple yet efficient method that supports the
module’s applied purpose. This module is composed of clear instructions and guided
tutorials to lead the reader/user toward the same goal. Beyond that, the designed module
has gone through trials to ensure that users can obtain the best results in achieving the
corresponding purpose of drone programming.

Keywords: Drone, Programming, Python


References: 3 Articles + 2 Journals
DJI TELLO DRONE MODEL SPECIFICATION
LIST OF CONTENTS

ABSTRACT / ABSTRAK

DJI TELLO MODEL SPECIFICATION

CHAPTER I

INSTALLATION
1.1 Python
1.1.1 Definition of Python
1.1.2 How to Install Python
1.2 Github
1.2.1 Definition of Github
1.2.2 How to Create Github Account
1.3 Copilot
1.3.1 Definition of Copilot
1.3.2 How to get Copilot for Student
1.4 Visual Studio Code (VSC)
1.4.1 How to Download Visual Studio Code
1.5 Connect Python to Visual Studio Code
1.5.1 Download Python Extension
1.5.2 Test Python in Visual Studio Code
1.6 How to Install PIP for DJI Tello
1.6.1 How to install DJI Tello using pip
1.7 Connect Github Copilot into Visual Studio Code

CHAPTER II
SIMPLE COMMAND
2.1 Library and Object
2.2 Connect
2.3 Take-off & Landing
2.4 Simple Movements
2.5 Rotation
2.6 Battery Monitoring
2.7 DJI Tello User Guide and All Commands

CHAPTER III

FLYING PATH

3.1 Simple Flying Path

3.2 Rectangle Flight Path

3.3 Square Flight Path

3.4 Triangle Flight Path

3.5 Zig Zag Flight Path

3.6 Circle Flight Path

3.7 Simulate a Bouncing Ball

3.8 Up and Down Flying Path

3.9 Circus Flying Path

3.10 Control the Drone using Keyboard and Take Picture using Camera

3.11 Recording Video

3.12 Tips for Making a Flying Path and Play with the Drone

CHAPTER IV

SENSORS

4.1 Camera

4.1.1 Definition of Camera

4.1.2 Control the Camera

4.1.3 Example of Controlling Camera Inside The Tello Drone


4.2 Infrared Sensor

4.2.1 Definition of Infrared Sensor

4.2.2 Control the Infrared Sensor

4.2.3 Example of Controlling Infrared Sensor Inside The Tello Drone

4.3 Proximity Sensor

4.3.1 Definition of Proximity Sensor

4.3.2 Control the Proximity Sensor

4.3.3 Example of Controlling Proximity Sensor Inside The Tello Drone

CHAPTER V

ARTIFICIAL INTELLIGENCE

5.1 Artificial Intelligence

5.1.1 Machine Learning

5.1.2 Deep Learning

5.2 More about Dataset and Machine Learning Model

CHAPTER VI

MACHINE LEARNING

6.1 OpenCV Cascade Classifier

6.1.1 Haar Cascade Classifier

6.1.2 How to Install Haar Cascade

6.2 Programs with Haar Cascade


6.2.1 Control Face Detection and Movement with Keyboard
6.2.2 Face Detection Commanded Movement
6.2.3 Face Tracking & Follow Program
6.3 Roboflow
6.3.1 Create Custom Dataset Manually
6.3.2 Find and Use a Custom Dataset on Roboflow Universe
6.3.3 More Things about Roboflow
6.4 Kaggle
6.4.1 Getting Started on Kaggle
6.4.2 Find a Dataset on Kaggle
6.5 YOLOv5
6.5.1 What is YOLOv5?
6.5.2 How to install YOLOv5?
6.6 Google Colaboratory
6.6.1 What is Google Colab?
6.6.2 How to use YOLOv5 on Google Colab as Environment
6.7 Jupyter Notebook
6.8 DJI Tello Mask Detection Program
6.9 Object Detection Using YOLOv5

CHAPTER VII

SIMULATION PROJECT

7.1 Simulation for Search and Rescue Project

7.1.1 Scenario for the Simulation Project

7.1.2 DJI Tello Drone’s Program for Search and Rescue Simulation Project

BIBLIOGRAPHY

ATTACHMENT

CHAPTER I
INSTALLATION

1.1 Python

1.1.1 Definition of Python


Python is a computer programming language often used to build websites and
software, automate tasks, and perform machine learning and data analysis. Python is a
general-purpose language, meaning it can be used to create a variety of different
programs and isn’t limited to any specific problem. Python is widely considered
one of the easiest programming languages to learn, which makes it an excellent choice
for beginners who are just starting to program. In this module, Python will be the
main programming language used to program the drone, because users can also add
various libraries to make coding easier. Python also has a dedicated library for this
drone, the DJI Tello.

1.1.2 How to Install Python


These are the steps to install the Python programming language:

1. Go to the official website of Python: https://www.python.org/

2. Click the “Download” section and choose your Operating System (OS):
Windows or macOS
3. Select and Download Python Executable Installer

For Windows, you can click the name of the Python version. We highly
recommend downloading the latest version. Click “Python 3.10.11 - April 5,
2023”

For macOS, you can also download the latest version of Python. Click,
“Latest Python 3 Release - Python 3.11.3”
4. Run Executable Installer

After you have downloaded the installer then just run the installer. Make sure
to select both the checkboxes at the bottom and then click “Install Now”

On clicking the “Install Now”, the installation process will start. The
installation process will take a few minutes to complete.

Once the installation is successful, the following screen is displayed:


5. Verify Python is installed on Windows

To ensure Python was successfully installed on your operating system,
please follow these steps:

1. Open the command prompt (terminal).

2. Type ‘python’ there and press Enter.

3. The version of Python you have installed will be displayed if
Python was successfully installed on your Windows machine.

6. Verify PIP was installed

Pip is a powerful package management system for Python software packages,
so make sure that you have it installed.
To verify that pip was installed, follow the steps below:

1. Open the command prompt (terminal).

2. Enter pip -V to check whether pip was installed.

3. The following output appears if pip was installed successfully.

1.2 Github

1.2.1 Definition of Github


GitHub is a code hosting platform that is used for storing, tracking, and
collaborating on software projects. It helps individuals and teams use Git for
version control and collaboration more easily. On GitHub, users can code
independently or collaborate with others in real time. GitHub users are also
able to create accounts and share their coding projects.

1.2.2 How to Create a Github Account


1. Go to the Github webpage: https://github.com/

2. Enter your email address then set a password and username


3. Follow the next instruction given
4. You will be directed to the homepage.

1.3 Copilot

1.3.1 Definition of Copilot


GitHub Copilot is an AI pair programmer that helps you write code faster and
with less work. It draws context from comments and code to suggest individual lines
and whole functions instantly. GitHub Copilot is powered by OpenAI Codex, a
generative pretrained language model created by OpenAI. It is available as an
extension for Visual Studio Code, Visual Studio, Neovim, and the JetBrains suite of
integrated development environments (IDEs). Copilot helps the user create
more complex code without much effort. Copilot will be especially important
in the later stages of the module, where the user will use machine learning and
AI in the program.

1.3.2 How to get Copilot for Student


1. Go to : https://education.github.com/pack on your phone (recommended) or
other device.

2. Click on “Sign up for Student Developer Pack”

3. Scroll down and click on “Get student benefits”

4. Click on add an email address and click on the blue highlighted text to add
your personal school email. (Skip if you don’t have school email)

5. Go back to the page before and refresh. Click your school email and fill in the
name of your school, and how you plan to use Github.
6. If your school isn’t registered, select your school below the map and fill in
details of your school. (Skip if you can find your school on the “What is the
name of your school” answer tab).

7. Take a picture of your student ID using your phone and select Student ID on
the proof type. You can also upload an image of your Student ID (Not
Recommended). You can also use other types of proof according to the options
given below.
8. You might be able to get access to the student package or get a 1-3 days notice
after submitting the proof.

9. After you have successfully gained the student developer package, go back to
https://github.com/

10. Click on your profile on the top right corner of the screen and click settings.

11. Click on Copilot on the left hand side of the page


12. Activate your Copilot in settings

1.4 Visual Studio Code (VSC)

Visual Studio Code (VSC) is a programming software developed by Microsoft
which is available for Windows, macOS, Linux, and Raspberry Pi OS. Visual
Studio Code is a lightweight yet efficient programming software offered to users
free of charge. Visual Studio Code provides extensions for different programming
languages such as Java, C++, C#, PHP, and Python, the last of which will be used
throughout this module. Here’s how to get started with using Visual Studio Code
for drone programming:

1.4.1 How to Download Visual Studio Code


The following is a step-by-step tutorial on how to Download and Install Visual Studio Code:

1. Download the software through the following link:


https://code.visualstudio.com/Download
2. Choose among the various options available to continue with the Download
Process, select the installer in accordance to the operating system of your device.

3. As the Download finishes, VSC Setup will appear in the downloads folder.

4. Select the VSC Icon to start the Installation Process.

5. As the Installer opens, Visual Studio Code will ask you to accept the terms and
conditions of the software. Select ‘I Accept the agreement’ and continue by selecting
the Next button.
6. Choose the location data to run Visual Studio Code. The program will ask you to
browse for a file location, then select the Next button.

7. VSC will then ask to begin the Installation process, select the Install button.
8. After the installation has finished, you can select the Finish button and run the
software to start writing your very own program.

1.5 Connect Python to Visual Studio Code

1.5.1 Download Python Extension

1. Open Visual Studio Code


2. Click the “Extension” section

3. Search “Python” in the search bar, like the image given below. Download the Python
extension by Microsoft by clicking install in that extension.
1.5.2 Test Python in Visual Studio Code

1. Create a new folder anywhere in your computer or laptop and name it “Python”, you
can freely give a name to the folder.

2. Open Visual Studio Code and then use this shortcut Ctrl + K + Ctrl + O to open New
Folder

3. Search the folder that you have created and it will look like this:

4. Then click the folder and create a new file, you can give any name to the file but, you
MUST put “.py” at the end of the file name. Ex: “Tutorial.py”
5. You can test whether Python is already connected to Visual Studio Code by
running a very simple command: print("hello world!")

6. Then run the code by clicking this button or use a shortcut, “Ctrl + Alt + N”

7. Python is connected to Visual Studio Code if it outputs “hello world!”, like the
image given below
1.6 How to Install PIP for DJI Tello
DJI Tello is a mini-drone which will be developed and programmed through Python in
accordance with this module. The drone will be programmed by installing its library into
Python. The following is a guide to integrate the drone into Python using pip.

1.6.1 How to install DJI Tello using pip


1. Open the command prompt (terminal).
2. Type “pip install djitellopy” and click enter.
3. Djitellopy will start downloading. The following text will be shown if the
download was successful.
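As an optional extra check, you can ask Python itself whether the package is importable. This is a small sketch using only the standard library; the helper function `is_installed` is our own, not part of djitellopy:

```python
# Check whether a package is importable without actually importing it.
from importlib.util import find_spec

def is_installed(package_name):
    """Return True if `package_name` can be imported in this environment."""
    return find_spec(package_name) is not None

if __name__ == "__main__":
    print("djitellopy installed:", is_installed("djitellopy"))
```

If this prints False, re-run the pip install step above and check that pip belongs to the same Python installation you are running.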

1.7 Connect Github Copilot to Visual Studio Code

1. Launch Visual Studio Code on your PC.

2. On the left side of the app, click on Extensions (Ctrl+Shift+X)


3. Search for “Github Copilot” in the marketplace

4. Install Github Copilot

5. If you have not previously authorised Visual Studio Code in your GitHub account,
you will be prompted to sign in to GitHub in Visual Studio Code.

6. In your browser, GitHub will request the necessary permissions for GitHub Copilot.
To approve these permissions, click Authorize Visual Studio Code.

7. To confirm the authentication, in Visual Studio Code, in the "Visual Studio Code"
dialog box, click Open.

8. After the Github Copilot connects to the Visual Studio Code, you can use the
extension in the VSC.
CHAPTER II

SIMPLE COMMANDS

2.1 Library and Object


A Python library is a collection of code that makes everyday tasks more efficient and
helps a coder write programs easily. The library must be imported before we start coding
the DJI Tello, because it makes programming the drone easier and more efficient. The
required library for the DJI Tello is imported as follows:

Import library code: from djitellopy import Tello

After we import the library into our code, we should define an object to use in
our program. An object is simply a collection of data (variables) and methods (functions). In
our code, we define an object as simply as we declare a variable. We can give
any name to our object, and this is the code to define one:

Define object format: variableName = Tello()

Variable names can be anything you want to define your Tello drone in a set of commands.

E.g. Define Object: tello = Tello()

2.2 Connect
Before the drone can be programmed, the device must first be connected
to the drone using simple commands available from the Python library. The Tello drone
connects to the user’s device over WiFi.

To connect the drone with the user’s device, first join the drone’s WiFi network:
turn on the drone, then connect the device to the drone’s network name. After the
device has been connected, a few simple commands in the program give access to
the drone.

Make a Connection format: variableNames.connect()

Creating a Connection: tello.connect()

2.3 Take-off & Landing


Take-off and landing are two of the simplest yet most important commands for the drone.
Without take-off, your drone won’t be able to move around or even function at all. Similarly,
without landing, your drone will be stuck in the air and you won’t be able to land it safely.
To take off, simply use the command:
tello.takeoff() # Takeoff Command

To land the drone, simply use the command:

tello.land() # Land Command
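If an error occurs between takeoff() and land(), the program can exit with the drone still in the air. One way to guarantee a landing is to wrap the pair in a context manager. This is a sketch of that safety pattern; the `flight` helper is our own idea, not part of djitellopy:

```python
from contextlib import contextmanager

@contextmanager
def flight(drone):
    """Take off on entry and guarantee a landing on exit, even on errors."""
    drone.takeoff()
    try:
        yield drone
    finally:
        drone.land()  # runs whether the block finished or raised

# With a real drone this would look like:
#   with flight(tello):
#       tello.move_forward(30)
```

The try/finally guarantees land() is called even if a movement command raises an exception mid-flight.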

2.4 Simple Movements


After taking off, we can give the Tello drone some simple movements. You can tell
the drone to move forward, backwards, left, right, up, and down.

To move forward, use the command :


tello.move_forward(x) # This command will move the drone forward x cm

To move backwards, use the command :


tello.move_back(x) # This command will move the drone backwards x cm

To move left, use the command :


tello.move_left(x) # This command will move the drone to the left x cm

To move right, use the command :


tello.move_right(x) # This command will move the drone to the right x cm

To move up, use the command:


tello.move_up(x) # This command will move the drone up x cm

To move down, use the command:


tello.move_down(x) # This command will move the drone down x cm

You can use the commands consecutively to make a simple flight path. Here is a
simple example of the commands:

tello.move_forward(30) # Moves forward 30cm


tello.move_back(30) # Moves backward 30cm
tello.move_right(30) # Moves right 30cm
tello.move_left(30) # Moves left 30cm
tello.move_up(30) # Moves up 30cm
tello.move_down(30) # Moves down 30cm
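The Tello SDK user guide (linked in section 2.7) documents that move commands accept distances from 20 to 500 cm, so values outside that range are rejected. A small helper that clamps distances before sending them avoids that error; the function name and the clamping choice are our own:

```python
def clamp_distance(cm, low=20, high=500):
    """Clamp a move distance to the range the Tello SDK accepts (20-500 cm)."""
    return max(low, min(high, cm))

# Example with a real drone:
#   tello.move_forward(clamp_distance(600))  # sends 500 instead of failing
```

Depending on your needs you might prefer to raise an error instead of silently clamping, so an out-of-range request is noticed during testing.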

2.5 Rotation
The Tello drone can do left, right, or full rotation simply by using single commands.
This enables the drone to make movements in any direction. We can also adjust the rotation
degree.
To rotate left, use the command:
tello.rotate_counter_clockwise(90) # this command will rotate the drone 90 degrees in a
counterclockwise direction

To rotate right, use the command:

tello.rotate_clockwise(90) # this command will rotate the drone 90 degrees in a clockwise
direction

To do a full rotation to the right, use the command:

tello.rotate_clockwise(360) # this command will do a full rotation in a clockwise direction

To do a full rotation to the left, use the command:

tello.rotate_counter_clockwise(360) # this command will do a full rotation in a counter
clockwise direction

We can also customize the degrees as desired, by using these commands:

tello.rotate_clockwise(x) # this command will rotate the drone x degrees in a clockwise
direction

tello.rotate_counter_clockwise(x) # this command will rotate the drone x degrees in a
counterclockwise direction
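The rotate commands take a positive angle plus a direction, so a signed heading change (for example, -90 for “turn left”) must be converted before sending. This sketch picks the shorter turn direction; the function and its ('cw'/'ccw', degrees) convention are our own:

```python
def rotation_command(signed_degrees):
    """Convert a signed heading change into ('cw' or 'ccw', degrees).

    Positive input means clockwise, negative means counterclockwise.
    Returns None when no rotation is needed.
    """
    degrees = signed_degrees % 360  # reduce any input to 0-359
    if degrees == 0:
        return None
    if degrees <= 180:
        return ("cw", degrees)
    return ("ccw", 360 - degrees)  # shorter to turn the other way
```

For example, a request of -90 becomes a 90-degree counterclockwise turn rather than a 270-degree clockwise one.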

2.6 Battery Monitoring


Through the user’s device, Python can display and monitor the drone’s
battery life using a single simple command. The command is assigned through the print()
function available in Python. To identify and monitor the drone’s battery, input the following
line of code.

Battery Status: print(tello.get_battery()) # display the information of the battery status in the
output section.
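A practical use of the battery reading is refusing to take off when the charge is too low. This sketch assumes a 20% threshold, which is our own choice rather than a djitellopy rule:

```python
def safe_to_fly(battery_percent, minimum=20):
    """Return True when the battery is at or above the chosen minimum percent."""
    return battery_percent >= minimum

# With a real drone:
#   if safe_to_fly(tello.get_battery()):
#       tello.takeoff()
#   else:
#       print("Battery too low to fly safely")
```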

2.7 DJI Tello User Guide and All Commands


Besides the commands we provide above, in this section we provide the
user guide from verified sources to help you learn more about the DJI Tello, how to use it,
and all of its simple commands. We highly recommend you read the user guide to learn
more simple commands and how to use them. We also advise you to try them out, so that
you can apply what you have learned in practice and not just store it as knowledge.

You can access the user guide by clicking or copying this link:
https://dl-cdn.ryzerobotics.com/downloads/Tello/Tello%20SDK%202.0%20User%20Guide.p
df
CHAPTER III

FLYING PATH

3.1 Simple Flight Path


A simple flying path is a basic route that the user defines for the drone
to take. It can be anything that takes the drone from point A to point B.

An example of a simple flying path:

from djitellopy import Tello

# Connect to Tello drone


tello = Tello() # Defines the Tello drone to tello
tello.connect() # Connects to the Tello drone

# Take off
tello.takeoff()

# Fly
tello.move_forward(50) # Moves 50cm Forward
tello.rotate_clockwise(90) # Rotates 90° Clockwise
tello.move_forward(50) # Moves 50cm Forward
tello.rotate_clockwise(90) # Rotates 90° Clockwise

# Land Tello
tello.land()

3.2 Rectangle Flight Path


A rectangle flying path is a flying path that traces a rectangle. A rectangle has two
equal widths and two equal heights, like the image given below:
To make a rectangle flying path (manually), you can follow this code:

This is a rectangle flying path with 30 cm height and 50 cm width

from djitellopy import Tello


tello = Tello()
tello.connect()

tello.takeoff()

tello.move_forward(30) # Moves forward 30cm


tello.rotate_counter_clockwise(90) # rotate 90 degrees counterclockwise
tello.move_forward(50) # Moves forward 50cm
tello.rotate_counter_clockwise(90) # rotate 90 degrees counterclockwise
tello.move_forward(30) # Moves forward 30cm
tello.rotate_counter_clockwise(90) # rotate 90 degrees counterclockwise
tello.move_forward(50) # Moves forward 50cm
tello.rotate_counter_clockwise(90) # rotate 90 degrees counterclockwise

tello.land()

You can also make use of the for command to make loops so that your code can be more
efficient, like this:

from djitellopy import Tello


tello = Tello()
tello.connect()

tello.takeoff()

for i in range(2):
    tello.move_forward(30) # Moves forward 30cm
    tello.rotate_counter_clockwise(90) # rotate 90 degrees counterclockwise
    tello.move_forward(50) # Moves forward 50cm
    tello.rotate_counter_clockwise(90) # rotate 90 degrees counterclockwise

tello.land()

3.3 Square Flight Path


A square flying path is a flying path that traces a square. A square has 4 sides that are
congruent, or equal to each other. There are two ways to program a square path:
manually or using a loop.

To make a square flying path, you can use the following code:

from djitellopy import Tello # import library

tello = Tello()
tello.connect()

tello.takeoff()
tello.move_forward(30) # Move forward 30 cm
tello.rotate_clockwise(90) # Rotate 90 degrees clockwise
tello.move_forward(30) # Move forward 30 cm
tello.rotate_clockwise(90) # Rotate 90 degrees clockwise
tello.move_forward(30) # Move forward 30 cm
tello.rotate_clockwise(90) # Rotate 90 degrees clockwise
tello.move_forward(30) # Move forward 30 cm
tello.rotate_clockwise(90) # Rotate 90 degrees clockwise

tello.land()

You can also make use of the for command to make loops so that your code can be more
efficient, as shown below:

from djitellopy import Tello

tello = Tello()
tello.connect()
tello.takeoff()
for i in range(4):
    tello.move_forward(30) # Move forward 30 cm
    tello.rotate_clockwise(90) # Rotate 90 degrees clockwise

tello.land()

3.4 Triangle Flight Path


A triangle flying path is a flying path that traces a triangle. A triangle has 3 sides, and
there are many types of triangles.
This code is an example of a simple right triangle (two 50 cm legs and a roughly
71 cm hypotenuse):

from djitellopy import Tello


tello = Tello()
tello.connect()

tello.takeoff()

tello.rotate_clockwise(45)
tello.move_forward(50)
tello.rotate_clockwise(90)
tello.move_forward(50)
tello.rotate_clockwise(135)
tello.move_forward(71)
tello.rotate_clockwise(90)

tello.land()
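The square and triangle paths are both special cases of a regular polygon: n equal sides with an exterior turn of 360/n degrees at each corner. Planning the path as data first lets you check it before flying; this sketch uses our own (command, value) tuple convention, not a djitellopy feature:

```python
def regular_polygon_path(sides, side_cm):
    """Build a flight plan for a regular polygon as (command, value) tuples."""
    turn = 360 // sides  # exterior angle in degrees at each corner
    plan = []
    for _ in range(sides):
        plan.append(("forward", side_cm))
        plan.append(("cw", turn))
    return plan

# Replaying the plan on a real drone:
#   for command, value in regular_polygon_path(3, 50):
#       if command == "forward":
#           tello.move_forward(value)
#       else:
#           tello.rotate_clockwise(value)
```

With sides=4 this reproduces the square loop from section 3.3, and sides=3 gives an equilateral triangle with 120-degree turns.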

3.5 Zig Zag Flight Path


The Zig Zag flying path follows a Z trace. This is an example of a Zig Zag drone
flight path:

from djitellopy import Tello

tello = Tello()
tello.connect()

tello.takeoff()
tello.rotate_clockwise(45)
tello.move_forward(50)
tello.rotate_counter_clockwise(90)
tello.move_forward(50)
tello.rotate_clockwise(90)
tello.move_forward(50)
tello.rotate_counter_clockwise(90)
tello.move_forward(50)

tello.land()
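The zigzag above alternates a counterclockwise and a clockwise turn between equal legs. Generating the pattern with a loop makes the number of legs adjustable; the (command, value) planning format is our own convention:

```python
def zigzag_path(legs, leg_cm, angle=90):
    """Plan a zigzag: equal forward legs with alternating turns between them."""
    plan = [("forward", leg_cm)]
    for i in range(legs - 1):
        # alternate direction: counterclockwise after the first leg, then clockwise, ...
        direction = "ccw" if i % 2 == 0 else "cw"
        plan.append((direction, angle))
        plan.append(("forward", leg_cm))
    return plan
```

Replaying it on the drone works exactly like the polygon planner: call move_forward for "forward" entries and the matching rotate command for each turn.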

3.6 Circle Flight Path


A circle flying path is a flying path that traces a circle. Circle is a unique shape that
doesn’t have any side, besides that a circle is shaped in all directions, without any corners or
edges. When a flying object follows a circle's flying path, it moves in a circular motion,
always maintaining the same distance from the centre of the circle.
This code is an example to make a circle:
from djitellopy import Tello
import time

tello = Tello()
tello.connect()
tello.takeoff()

tello.move_up(20)

tello.send_rc_control(0,0,0,0)
time.sleep(0.1)
# Turns motors on:
tello.send_rc_control(-100,-100,-100,100)
time.sleep(2)
tello.send_rc_control(0,10,20,0)
time.sleep(3)
tello.send_rc_control(0,0,0,0)
time.sleep(2)

v_up = 0
for i in range(4):
    tello.send_rc_control(40, -5, v_up, -35)
    time.sleep(4)

tello.send_rc_control(0,0,0,0)
time.sleep(0.5)

tello.land()
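Another way to think about a circle is to approximate it with a many-sided polygon: divide the circumference 2πr into n short straight segments and turn 360/n degrees after each. This sketch only computes the numbers, which is the part we can check without a drone; the function is our own, not a djitellopy call:

```python
import math

def circle_segments(radius_cm, n):
    """Approximate a circle of the given radius with n straight segments.

    Returns (segment_length_cm, turn_degrees) to repeat n times.
    """
    segment = 2 * math.pi * radius_cm / n  # circumference split into n parts
    turn = 360 / n                         # exterior angle per step
    return segment, turn
```

Keep in mind that each segment should stay at or above the drone's 20 cm minimum move distance, which limits how many segments a small circle can use.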

3.7 Simulate a Bouncing Ball


This flying path imitates a bouncing ball which has an up and down flying pattern.
This is one of the example to simulate a bouncing ball:

from djitellopy import Tello

tello = Tello()
tello.connect()

tello.takeoff()

heightUp = 60 # you can change the height the drone moves up
heightDown = 35 # you can change the height the drone moves down

for i in range(3):
    tello.move_up(heightUp)
    tello.move_down(heightDown)
    heightUp -= 9 # decrease the upward height each bounce
    heightDown += 8 # increase the downward height each bounce

tello.move_up(heightUp)

tello.land()
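The bounce logic can also be split into planning and flying, which lets us check the move list and the net altitude change before running it. This sketch assumes the decrement and increment happen once per bounce, inside the loop, as in the code above:

```python
def bounce_plan(height_up=60, height_down=35, bounces=3, decay=9, growth=8):
    """Plan the bouncing-ball moves; return (moves, net_altitude_change_cm)."""
    moves = []
    for _ in range(bounces):
        moves.append(("up", height_up))
        moves.append(("down", height_down))
        height_up -= decay    # each bounce rises a little less...
        height_down += growth # ...and falls a little further
    moves.append(("up", height_up))  # final climb before landing
    net = sum(v if d == "up" else -v for d, v in moves)
    return moves, net
```

Checking net before flying tells you roughly how far above the take-off hover height the drone ends up, which helps avoid planning a move below floor level.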

3.8 Up and Down Flying Path


This flying path makes the DJI Tello drone move up and down, not by using the
move up and down commands, but by taking off, landing, and then taking off
again. This is possible because we can use the takeoff() and land() commands as
often as we need. This is an example of the code:

from djitellopy import Tello

tello = Tello()
tello.connect()

tello.takeoff()

for i in range(3):
    tello.move_forward(100)
    tello.land()
    tello.takeoff()

tello.move_up(70)

tello.land()
tello.end()

3.9 Circus Flying Path


Despite the name, in this section we will use a command that may be new to
you: the flip() command. Yes, we can flip the drone by using the flip()
commands. In this section, we will also introduce a new library, namely time.
We can use this library to add delays between the drone’s commands. This is an
example of the code:

# Import necessary libraries


from djitellopy import Tello
import time #library

# Connect to Tello drone


tello = Tello()
tello.connect()

tello.takeoff()

tello.flip_right() #flip command

tello.flip_forward() #flip command


time.sleep(2) #delay command

tello.flip_left() #flip command


time.sleep(0.5) #delay command

tello.flip_back() #flip command


time.sleep(1) #delay command

tello.land()
tello.end()

3.10 Control the Drone using Keyboard and Take Pictures using Camera
The DJI Tello drone can be controlled from the user’s device
(preferably a laptop/computer) through the keyboard, as programmed in Python. Controlling
the mini-drone with the keyboard lets the user steer the drone exactly as intended,
using the various movement commands shown earlier: moving forward & backwards,
moving left & right, and moving up & down. In this section, we also use the camera
sensor inside the Tello: we can see the camera image while the drone is under our
control, and we can take a picture when the camera is stable by pressing the ‘p’ key.
The picture will automatically be saved to the same folder as your code.
Below is an example of a program constructed to control the Tello drone with the
keyboard and take pictures using the camera. Although the example maps a certain
set of keys, the user can rescript the program to whatever fits the user’s liking.

from djitellopy import Tello


import cv2

tello = Tello()
tello.connect()
tello.streamon()
frame_read = tello.get_frame_read()

count = 0 #Count the number of screenshot

tello.takeoff()

while True:
    img = frame_read.frame
    cv2.imshow("drone", img)

    key = cv2.waitKey(1) & 0xff

    if key == 27: # ESC
        break
    elif key == ord('p'):
        cv2.imwrite("Screenshot " + str(count) + ".png", img)
        count += 1
    elif key == ord('w'):
        tello.move_forward(30)
    elif key == ord('s'):
        tello.move_back(30)
    elif key == ord('a'):
        tello.move_left(30)
    elif key == ord('d'):
        tello.move_right(30)
    elif key == ord('l'):
        tello.rotate_clockwise(30)
    elif key == ord('j'):
        tello.rotate_counter_clockwise(30)
    elif key == ord('u'):
        tello.move_up(30)
    elif key == ord('n'):
        tello.move_down(30)

tello.land()
tello.end()
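A long chain of elif branches like the one above can also be written as a dispatch table mapping each key to an action, which makes adding a new control a one-line change. This sketch stores method names as strings so the table reads as data; the KEYMAP layout matches the example's bindings, and the helper itself is our own:

```python
# Map each control key to a (method_name, argument) pair.
KEYMAP = {
    'w': ("move_forward", 30),
    's': ("move_back", 30),
    'a': ("move_left", 30),
    'd': ("move_right", 30),
    'l': ("rotate_clockwise", 30),
    'j': ("rotate_counter_clockwise", 30),
    'u': ("move_up", 30),
    'n': ("move_down", 30),
}

def handle_key(drone, key_char):
    """Look up the key and call the matching drone method; True if handled."""
    if key_char not in KEYMAP:
        return False
    method_name, argument = KEYMAP[key_char]
    getattr(drone, method_name)(argument)
    return True
```

Inside the main loop you would replace the elif chain with a single call such as `handle_key(tello, chr(key))` for printable keys.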

3.11 Recording Video


In this section, we will provide a simple code about how to record a video by using
DJI Tello Drone’s camera.

# import libraries

import time, cv2


from threading import Thread
from djitellopy import Tello

tello = Tello()
tello.connect()

# camera setup

keepRecording = True
tello.streamon()
frame_read = tello.get_frame_read()

# make a function or method

def videoRecorder():
    # create a VideoWriter object, recording to Videos/video5.avi
    # (the Videos folder must already exist)
    height, width, _ = frame_read.frame.shape
    video = cv2.VideoWriter('Videos/video5.avi',
        cv2.VideoWriter_fourcc(*'XVID'), 30, (width, height))

    while keepRecording:
        video.write(frame_read.frame)
        time.sleep(1 / 30)

    video.release()

# we need to run the recorder in a separate thread, otherwise blocking
# options would prevent frames from getting added to the video

recorder = Thread(target=videoRecorder)
recorder.start()

# normal commands

tello.takeoff()
# these are the movement, you can change it whatever you want

tello.move_up(100)
tello.rotate_counter_clockwise(360)

# ends

tello.land()

keepRecording = False
recorder.join()

3.12 Tips for Making a Flying Path and Play with the Drone
To make a flying path, we need a creative mind to make it an amazing path. You can
also search for inspiration or find a resource on what flying path to make. Just think
creatively and outside the box that the flying path you make is different from the common
path. After you have an idea what flying path to make, you need to think and code it how the
drone will follow and fly according to that flying path. You can also find resources from
anywhere to help you get the logic to code a flying path that you have imagined. Just make
your freestyle move and any flying path that you want and imagine and code it, because there
is no wrong or true in making a flying path, just make it amazing.
CHAPTER IV

SENSORS

4.1 Camera
4.1.1 Definition of Camera
From the Merriam Webster Dictionary, camera is a device that consists of a
lightproof chamber with an aperture fitted with a lens and a shutter through which the
image of an object is projected onto a surface for recording (as on a photosensitive
film or an electronic sensor) or for translation into electrical impulses (as for
television broadcast). A camera is an optical instrument to capture still images or to
record moving images, which are stored in a physical medium such as in a digital
system or on photographic film. A camera consists of a lens which focuses light from
the scene, and a camera body which holds the image capture mechanism. The DJI
Tello also has a camera that can be used like any other camera, although its
resolution is just 720p (720 pixels). Nevertheless, this camera sensor has a great
impact and many functions for the DJI Tello: we can use it to detect objects, take
photos, record video, avoid obstacles in front of the drone, and more.

4.1.2 Control the Camera


In this section, we will provide a simple manual to control the camera sensor
within Tello. We will provide a simple command with the explanation of its meaning.

import cv2

'''
this is a MUST library to use the camera.
you can learn more from this article:
https://www.topcoder.com/thrive/articles/what-is-the-opencv-library-and-why-do-we-need-to-know-about-it#:~:text=Capturing%20video%20using%20OpenCV,really%20useful%20for%20video%20analysis.
'''

tello.streamon()

# this command is used to enable the video stream

frame_read = tello.get_frame_read()

# this command creates a variable holding the frame, i.e. the
# picture captured by Tello's camera

while True:
    img = frame_read.frame
    cv2.imshow("drone", img)
    if cv2.waitKey(1) & 0xff == 27:  # press ESC to stop
        break

# these commands show the image captured by Tello's camera on our
# screen (cv2.waitKey is required for the window to refresh)

while True:
    img = frame_read.frame
    cv2.imwrite("drone.png", img)

# these commands take a screenshot of the image captured by Tello's
# camera and save it (here as drone.png) in the same folder as our
# code

4.1.3 Example of Controlling Camera Sensor Inside The Tello Drone


In this section, we show more about how to control the Tello's camera
through an example with complete code, so you can simply copy it and try it on your
own.

You can try this simple example of controlling the camera sensor: it shows the camera
image on your screen, and pressing the “p” key takes a screenshot of the image
captured by the Tello's camera.

from djitellopy import Tello
import cv2

tello = Tello()
tello.connect()

tello.streamon()
frame_read = tello.get_frame_read()

# counts the screenshots; change it to match the number of
# screenshots already in the pictures folder
count = 0

#tello.takeoff()
#you can take off the drone or not

while True:
    img = frame_read.frame
    cv2.imshow("drone", img)

    key = cv2.waitKey(1) & 0xff
    if key == 27:  # ESC
        break
    elif key == ord('p'):
        cv2.imwrite("screenshot " + str(count) + ".png", img)
        count += 1

tello.end()
#tello.land()
#if you take off the drone, make sure to land it as well

For another example, you can go back to the 3.8 section which is Chapter Flying Path
about “Controlling the Drone using Keyboard and Take Pictures using Camera.”

4.2 Infrared Sensor


4.2.1 Definition of Infrared Sensor
An infrared sensor (IR sensor) is a radiation-sensitive optoelectronic
component with a spectral sensitivity in the infrared wavelength range of 780 nm to 50
µm. Infrared sensors are commonly used in drones. A thermal (infrared) camera, for
instance, can detect areas of higher temperature: it can reveal overheating sections of
electrical equipment such as switchgear and substations, so a drone can inspect these
sections from a distance and increase the safety of human personnel. Infrared sensors
can also be used for night vision and surveillance. On the Tello drone, the infrared
sensor helps the drone measure its height above the ground and assists the drone
when it is landing.

4.2.2 Control the Infrared Sensor


To use and control the infrared sensor, we can call the command
“tello.get_distance_tof()” from the Tello library to get the drone's height as measured
by the infrared sensor. Once we know how high the drone is above the ground, we can
use that height to estimate the height of other objects, avoid obstacles, and more.
We can store the height from the infrared sensor by making a variable like this:

height = tello.get_distance_tof()

#height = the name of the variable
#height will store the height of the drone above the ground,
#obtained by calling “tello.get_distance_tof()”
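Before flying, you can check this kind of height-based decision logic without the drone at all. The sketch below is only an illustration: the decide helper and its thresholds are invented here, and fake_readings stands in for real tello.get_distance_tof() values.

```python
# Hardware-free sketch of height-based decisions; thresholds are invented.
def decide(height_cm):
    """Return the action a simple height rule might choose."""
    if height_cm <= 50:
        return "move_up"       # too low: possible obstacle below
    elif height_cm >= 100:
        return "move_down"     # too high: descend
    return "move_forward"      # comfortable band: keep flying

fake_readings = [40, 75, 120]  # stand-ins for tello.get_distance_tof()
print([decide(h) for h in fake_readings])
# ['move_up', 'move_forward', 'move_down']
```

Once this logic behaves as expected, the same function can be fed real sensor readings in a flight loop.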

4.2.3 Example of Controlling Infrared Sensor Inside The Tello Drone


In this section, we show more about how to control the Tello's
infrared sensor through an example with complete code, so you can simply copy it
and try it on your own. This is just a simple example, so you can combine it with
your own code or use it as inspiration for a new one. This is the example code:

# Import necessary libraries
from djitellopy import Tello
import time

# Connect to Tello drone
tello = Tello()
tello.connect()

tello.takeoff()

height = tello.get_distance_tof()
print(height)

# Check for obstacles
if height <= 50:
    tello.move_up(20)       # Move up if an obstacle is detected
    time.sleep(1)           # Wait for the obstacle to pass
else:
    tello.move_forward(20)  # Move forward if no obstacle is detected

# Maintain the same height
if height >= 100:
    tello.move_down(20)
elif height <= 80:
    tello.move_up(20)

print(height)

tello.land()
tello.end()
4.3 Proximity Sensor
4.3.1 Definition of Proximity Sensor
A proximity sensor is a device that can detect the approach or presence of nearby
objects without physical contact. It has many applications, such as measurement,
gauging the thickness of metal, detecting surface irregularities, and measuring angular
speed. Proximity sensors use different technologies, such as ultrasonic, LiDAR, or
infrared, to detect obstacles and provide information. On drones specifically, a
proximity sensor system is used to scan the area near the drone and send that
information to the drone so it can avoid unexpected contact and collision risks.
With the help of proximity sensors, drones can navigate safely in complex
environments, both indoors and outdoors.

4.3.2 Control the Proximity Sensor


The Tello drone uses proximity sensors mainly to secure a landing area and to
make sure that the landing area is flat. Besides that, the proximity sensors can also be
used to control the altitude and distance of the drone when it is flown indoors. To use
and control the proximity sensor, we can call the same command as for the infrared
sensor, “tello.get_distance_tof()”, to get the height of the drone measured from the
ground.
We can store the height from the proximity sensor by making a variable like this:

height = tello.get_distance_tof()

#height = the name of the variable
#height will store the height of the drone above the ground,
#obtained by calling “tello.get_distance_tof()”

4.3.3 Example of Controlling Proximity Sensor Inside The Tello Drone


In this section, we provide an example of controlling the proximity sensor
and of the benefit of knowing the drone's height. In the example code below, the
drone compares its height reading before and after flying forward; if the reading
changes (for example because an obstacle passes underneath), the drone moves up
so there is no crash:

# Import necessary libraries
from djitellopy import Tello
import time

# Connect to Tello drone
tello = Tello()
tello.connect()

tello.takeoff()

# variables
height = 0
newHeight = 0
sameHeight = True

# loop that compares the first height with the new height
# measured after flying forward 100 cm
while sameHeight:
    height = tello.get_distance_tof()
    print("Height: " + str(height))
    time.sleep(0.5)

    tello.move_forward(100)
    time.sleep(2)

    newHeight = tello.get_distance_tof()
    print("New Height: " + str(newHeight))

    if newHeight != height:
        # the ground distance changed, so something is probably
        # underneath the drone: move up and stop the loop
        tello.move_up(50)
        sameHeight = False
        break

tello.land()
tello.end()
CHAPTER V

ARTIFICIAL INTELLIGENCE

5.1 Artificial Intelligence


In this section, we provide theory, descriptions, and detailed information about
the field of Artificial Intelligence, including Machine Learning and Deep Learning. Artificial
Intelligence, abbreviated as AI, refers to the replication of human intelligence in
devices designed to reason and acquire knowledge similarly to humans. It
involves the creation of algorithms and computer programs that are capable of carrying out
operations that ordinarily require human intellect, such as speech recognition, language
translation, visual perception, and decision-making.
Machine learning is one method of achieving AI: it involves instructing computers
to learn from data and improve their performance over time, and in this sense it is a
foundation upon which AI is built. To enable machines to recognize patterns and make
predictions from data, models are typically trained on large datasets, which is what we
will be doing throughout this module.

5.1.1 Machine Learning


Machine learning is a branch of Artificial Intelligence (AI) and computer
science which focuses on the use of data and algorithms to imitate the way humans
learn, becoming more accurate at certain tasks over time. Machine learning is
especially important in the field of data science. Through the use of statistical
methods, machine learning algorithms can make classifications or predictions
in a project.

5.1.2 Deep Learning


Deep learning is a method in Artificial Intelligence (AI) that teaches
computers to process data in a way inspired by the human brain. Deep
learning models are able to recognize complex patterns in images, text, sounds, and
other data to produce accurate insights and predictions. It uses multiple layers to
extract higher-level features from the raw input. Some common applications of
deep learning are image colourisation, virtual assistants, and chatbots.
5.2 More about Datasets and Machine Learning Models
Back to our main topic, the Drone Programming Manual Module using the DJI Tello
drone: so far we have learned how to program the drone to move and how to use each
sensor. The camera sensor, however, becomes far more useful when we integrate Machine
Learning with it. These are the typical steps for building a dataset and a machine
learning model:
1. Remember the goal. In our case, we want to detect an object using the DJI Tello
drone's camera; for example, we want to detect people wearing a mask, and that
becomes the goal of our Machine Learning project.
2. Next, make or collect a dataset. We can use platforms such as Roboflow or
Kaggle, among others covered later in this module.
3. Then, after collecting the data, prepare it: clean and visualise the data. We
should also choose a model based on the output we want after running machine
learning on the dataset.
4. Once we have a ready dataset and a suitable model, we can train the model on
our dataset. For training, we can use platforms such as Google Colab together
with YOLOv5, either on a custom dataset or on a dataset they provide.
5. After training, the model is rarely perfect on the first try, so we evaluate it. We
can improve the results by using a better, cleaner dataset, or by choosing a better
machine learning model.
6. Beyond evaluation, in most cases we also take a further step called parameter
tuning. This is the last step of building a good machine learning model, and we
can spend a lot of time on it. The parameters in parameter tuning are the
variables of the model; at particular parameter values, the accuracy will reach
its maximum.
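The six steps above can be sketched end to end on a toy problem. The code below is only an illustration in plain Python, not a real computer-vision pipeline: the "dataset" is a handful of invented (feature, label) pairs, the "model" is a single threshold, and "parameter tuning" tries a few candidate thresholds.

```python
# Toy end-to-end sketch of the dataset -> train -> evaluate -> tune workflow.
# All names and numbers here are illustrative, not part of any real library.

# Steps 2-3: a tiny labelled "dataset" of (feature, label) pairs,
# where label 1 = "mask" and 0 = "no mask" (purely synthetic values).
data = [(0.1, 0), (0.2, 0), (0.35, 0), (0.6, 1), (0.7, 1), (0.9, 1),
        (0.4, 0), (0.8, 1)]
train_set, test_set = data[:6], data[6:]   # simple train/test split

def accuracy(threshold, samples):
    """Step 5: evaluate a threshold model on a set of samples."""
    correct = sum((x >= threshold) == bool(y) for x, y in samples)
    return correct / len(samples)

# Step 4: "train" by picking the candidate that fits the training set best.
# Step 6: parameter tuning = trying several candidate threshold values.
candidates = [0.3, 0.5, 0.7]
best = max(candidates, key=lambda t: accuracy(t, train_set))

print("best threshold:", best)
print("test accuracy:", accuracy(best, test_set))
```

A real project replaces the threshold with a model such as YOLOv5 and the toy pairs with annotated images, but the loop of train, evaluate, and tune stays the same.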

Machine Learning is a very interesting field, and in the next chapter we
will cover many things that can be learned about it. We will introduce Haar Cascades,
making or finding datasets using Roboflow and Kaggle, and give an introduction to
Google Colaboratory, Jupyter Notebook, and YOLOv5. We will also build an object
detection project about Machine Learning that we can apply in real life.
CHAPTER VI

MACHINE LEARNING

6.1 OpenCV Cascade Classifier


6.1.1 Haar Cascade Classifier
Haar cascade is an algorithm known for its simplicity and efficiency. The
algorithm detects objects within images, and it can therefore also detect objects
through the Tello drone's video stream with the help of a Python program. A Haar
Cascade Classifier is trained on examples of what should be detected (positive
images) and what should not be detected (negative images). With this method of
detection, the Haar Cascade works as a classifier, separating positive data from
negative data. The algorithm slides a window across the image in cascading stages,
computing features in each window to decide whether it might contain the object. In
this module we will use pre-trained Haar Cascades from the OpenCV library to avoid
the long process of custom machine learning.
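The cascade idea itself, where cheap early stages quickly reject non-object windows so that only promising windows reach the more expensive later stages, can be sketched in plain Python. Everything below is invented for illustration: a real Haar cascade computes Haar-like features over image windows, not the toy feature dictionaries used here.

```python
# Conceptual sketch of a cascade classifier: each stage is a cheap test,
# and a window must pass every stage to be accepted as a detection.
# The "windows" are fake feature dicts; real stages use Haar-like features.

stages = [
    lambda w: w["contrast"] > 0.2,   # stage 1: cheapest, rejects most windows
    lambda w: w["symmetry"] > 0.5,   # stage 2
    lambda w: w["eye_band"] > 0.6,   # stage 3: most expensive, runs last
]

def cascade_accepts(window):
    # Reject as soon as any stage fails; this early exit is what
    # makes cascades fast on the many windows that contain no face.
    return all(stage(window) for stage in stages)

windows = [
    {"contrast": 0.1, "symmetry": 0.9, "eye_band": 0.9},  # rejected at stage 1
    {"contrast": 0.8, "symmetry": 0.3, "eye_band": 0.9},  # rejected at stage 2
    {"contrast": 0.8, "symmetry": 0.7, "eye_band": 0.8},  # passes all stages
]

detections = [w for w in windows if cascade_accepts(w)]
print(len(detections))  # 1
```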

6.1.2 How to install Haar Cascade


OpenCV maintains a public repository on GitHub which contains
pre-trained Haar Cascades that can be used to detect human faces, human eyes, other
human features, and also vehicles and other objects. For a simpler understanding of
the process, follow the instructions below.
1. Open The OpenCV library containing pre-trained Haar Cascade Classifier
through this link:
https://github.com/opencv/opencv/tree/master/data/haarcascades

2. Select a pre-trained Haar Cascade to use

For example, in this module we will select the
haarcascade_frontalface_default.xml file, which detects human faces.

3. Download the previously selected classifier as an XML file

4. After downloading the XML file, move the file into the same folder as the
file in which the program will be written

5. To load the Haar Cascade into your program, use the following code to
call the Haar Cascade classifier.
faceCascade = cv2.CascadeClassifier('folder name/haarcascade_frontalface_default.xml')

For further comprehension and involvement in programs see the examples below.

6.2 Programs with Haar Cascade


6.2.1 Control Face Detection and Movement with Keyboard
The following program uses the Haar Cascade Classifier to detect and
track human faces while the drone is controlled with the keyboard. The program is
an example; to achieve your goal, modify and specialise the code for your own
purpose.
In this case, 'folder name' in the classifier path should be changed
to match the folder in which your haarcascade .xml file is stored. Note that
the .xml file should be in the same folder as the Python file in
which your code is written.

from djitellopy import Tello
import cv2

tello = Tello()
tello.connect()

print(tello.get_battery())

tello.streamon()
frame_read = tello.get_frame_read()

tello.takeoff()

def findFace(img):
    faceCascade = cv2.CascadeClassifier('folder name/haarcascade_frontalface_default.xml')
    imGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(imGray, 1.1, 4)

    myFaceListC = []
    myFaceListArea = []

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cx = x + w // 2
        cy = y + h // 2
        area = w * h
        cv2.circle(img, (cx, cy), 5, (0, 255, 0), cv2.FILLED)
        myFaceListC.append([cx, cy])
        myFaceListArea.append(area)

    if len(myFaceListArea) != 0:
        i = myFaceListArea.index(max(myFaceListArea))
        return img, [myFaceListC[i], myFaceListArea[i]]
    else:
        return img, [[0, 0], 0]

while True:
    img = frame_read.frame
    cv2.imshow("drone", img)

    key = cv2.waitKey(1) & 0xff
    if key == 27:  # ESC
        break
    elif key == ord('w'):
        tello.move_forward(30)
    elif key == ord('s'):
        tello.move_back(30)
    elif key == ord('a'):
        tello.move_left(30)
    elif key == ord('d'):
        tello.move_right(30)
    elif key == ord('e'):
        tello.rotate_clockwise(90)
    elif key == ord('q'):
        tello.rotate_counter_clockwise(90)
    elif key == ord('r'):
        tello.move_up(30)
    elif key == ord('f'):
        tello.move_down(30)

    img = tello.get_frame_read().frame
    img, info = findFace(img)
    cv2.imshow("Face Tracking", img)

tello.land()
Control Face Detection and Movement with Keyboard Output:

6.2.2 Face Detection Commanded Movement


In this section, the following program uses the Haar Cascade Classifier to
detect and track human faces through a series of commands. The program is an
example; to achieve your goal, modify and specialise the code for your own
purpose.

from djitellopy import Tello
import cv2
import numpy as np

#PARAMETERS
width = 200
height = 100
startCounter = 1  # set to 0 to let the drone take off

tello = Tello()
tello.connect()

print(tello.get_battery())
tello.streamoff()
tello.streamon()

def findFace(img):
    faceCascade = cv2.CascadeClassifier('folder name/haarcascade_frontalface_default.xml')
    imGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(imGray, 1.1, 4)

    myFaceListC = []
    myFaceListArea = []

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cx = x + w // 2
        cy = y + h // 2
        area = w * h
        cv2.circle(img, (cx, cy), 5, (0, 255, 0), cv2.FILLED)
        myFaceListC.append([cx, cy])
        myFaceListArea.append(area)

    if len(myFaceListArea) != 0:
        i = myFaceListArea.index(max(myFaceListArea))
        return img, [myFaceListC[i], myFaceListArea[i]]
    else:
        return img, [[0, 0], 0]

while True:
    frame_read = tello.get_frame_read()
    myFrame = frame_read.frame
    img = cv2.resize(myFrame, (width, height))

    if startCounter == 0:
        tello.takeoff()
        tello.move_forward(20)
        startCounter = 1

    img = tello.get_frame_read().frame
    img, info = findFace(img)
    cv2.imshow("Face Tracking", img)
    if cv2.waitKey(1) == ord('q'):
        tello.land()
        break

Face Detection Commanded Movement Output:

6.2.3 Face Tracking & Follow Program


With this development of the face tracking program using the Haar Cascade
Classifier, the user can now command the drone to follow the user's face as detected
through the drone's camera. Haar Cascade, although easy to use, is not the most
accurate method: the classifier may produce false detections on objects that merely
resemble faces. The program therefore works best in an open space, free of countless
variables (to see the program at its best, find a spacious room with static colours).
The following code is an example; to achieve your goal, modify and specialise it
for your own purpose.

from djitellopy import Tello
import cv2
import numpy as np
import time

tello = Tello()
tello.connect()

tello.takeoff()
tello.streamon()
tello.send_rc_control(0, 0, 25, 0)
time.sleep(0.5)

w, h = 800, 600
fbRange = [6200, 6800]
pid = [0.4, 0.4, 0]
pError = 0

def findFace(img):
    faceCascade = cv2.CascadeClassifier('folder name/haarcascade_frontalface_default.xml')
    imGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(imGray, 1.1, 4)

    myFaceListC = []
    myFaceListArea = []

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cx = x + w // 2
        cy = y + h // 2
        area = w * h
        font = cv2.FONT_HERSHEY_COMPLEX_SMALL
        cv2.putText(img, 'Face', (x + w, y + h), font, 1, (255, 0, 0), 1,
                    cv2.LINE_AA)
        cv2.circle(img, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
        myFaceListC.append([cx, cy])
        myFaceListArea.append(area)

    if len(myFaceListArea) != 0:
        i = myFaceListArea.index(max(myFaceListArea))
        return img, [myFaceListC[i], myFaceListArea[i]]
    else:
        return img, [[0, 0], 0]

def trackFace(info, w, pid, pError):
    area = info[1]
    x, y = info[0]
    fb = 0

    error = x - w // 2
    speed = pid[0] * error + pid[1] * (error - pError)
    speed = int(np.clip(speed, -100, 100))

    if area > fbRange[0] and area < fbRange[1]:
        fb = 0     # face at a good distance: hover
    elif area > fbRange[1]:
        fb = -10   # face too close: move backward
    elif area < fbRange[0] and area != 0:
        fb = 50    # face too far: move forward

    if x == 0:
        speed = 0
        error = 0

    print(speed, fb)

    tello.send_rc_control(0, fb, 0, speed)
    return error

while True:
    img = tello.get_frame_read().frame
    img = cv2.resize(img, (w, h))
    img, info = findFace(img)
    pError = trackFace(info, w, pid, pError)
    cv2.imshow("Face Tracking", img)
    if cv2.waitKey(1) == ord('q'):
        tello.land()
        break
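The steering value inside trackFace is a small proportional-derivative (PD) controller: the yaw speed is proportional to how far the face centre sits from the frame centre, plus a damping term on the change in error. Below is a standalone sketch of that arithmetic, using the same pid gains as the program above; the yaw_speed helper and the sample numbers are ours, not part of djitellopy.

```python
# Standalone sketch of the PD yaw-speed computation used in trackFace.
# pid = [Kp, Kd, Ki]; only the first two gains are used here.

def clip(value, lo, hi):
    return max(lo, min(hi, value))

def yaw_speed(x, frame_width, pid, p_error):
    error = x - frame_width // 2            # pixels the face is off-centre
    speed = pid[0] * error + pid[1] * (error - p_error)
    return int(clip(speed, -100, 100)), error

pid = [0.4, 0.4, 0]
speed, err = yaw_speed(x=500, frame_width=800, pid=pid, p_error=0)
print(speed, err)  # prints: 80 100  (face right of centre -> positive yaw)
```

Feeding the returned error back in as p_error on the next frame is what gives the derivative term its damping effect.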

6.3 Roboflow
Roboflow empowers developers to build their own computer vision applications,
whatever their skill set or experience. It provides all of the tools needed to convert raw
images into a custom-trained computer vision model and deploy it for use in applications.
This is the overview of Roboflow from its official website, https://roboflow.com/

6.3.1 Create Custom Dataset Manually


This is just one of the ways to make a custom dataset for training our
machine learning model. Here we use Roboflow as the platform; after this we will
also cover another platform, Kaggle. This is the step-by-step guide to making a
dataset with Roboflow:

1. Go to the official website of roboflow: https://roboflow.com/

2. Sign up or register if you don’t have an account, then sign in to the roboflow.

3. For the first time, you can watch the video tutorial provided by Roboflow on
the website in the Quick Tips section or Resources section.

4. Then, you must create a workspace and name it whatever you want.

5. After you have a workspace, go there and create a new project


6. For this tutorial, choose the project type “Object Detection
(Bounding Box)”. You can detect anything you want, but in this tutorial
we use a mask; name the project whatever you like. For the licence,
we simply use “public domain.” After that, click the create public project button.

7. You can make your own custom dataset from your own images or videos.
You can also search for images of the object you want to detect and upload
them in the upload section.
8. After you have uploaded all the images you want to use as training data for
the machine learning model, assign them all in the assign section.

9. After that, go to the annotation section and annotate everything in the
images that you want to detect. Use the bounding box tool (or just press ‘b’
on the keyboard) and annotate the object you want to detect, like this (if
you want to train the model to detect a mask):

10. After you have annotated all of the images you want to use, add them to
the dataset. (By default the system divides the images into three
categories: test, train, and valid. We just use this default split.)

11. You will then see this in the annotation section; click “Generate New
Version”

12. After you click “Generate New Version”, you will land on this page, where
you can adjust the details beforehand
13. Once you have set up the training data the way you want, just click
generate and wait while the dataset version is prepared.

14. When the loading is done, you will see this page; export the
dataset, choose the format “YOLO v5 PyTorch”, and download the ZIP to
your computer.
15. After the download is complete, congratulations: you have made a custom
dataset by yourself. You can now use that dataset with YOLOv5 in the
Google Colab environment described in the next chapter (section 6.6.2,
Google Colab Environment).

6.3.2 Find and Use a Custom Dataset on Roboflow Universe


On Roboflow you can also find datasets created by other people, so you can
either create one manually or just use someone else's. To find other people's
datasets, go to: https://universe.roboflow.com/

You will see this page; explore the website to find the dataset that suits you
best. After you have found it, download it and learn how to use it in the
YOLOv5 Google Colab Environment section (6.6.2).

6.3.3 More Things about Roboflow


Here are some more links that can help you explore Roboflow:

Roboflow Projects: https://app.roboflow.com/ (make sure to log in to your account).

Roboflow Universe: https://universe.roboflow.com/
Roboflow Documentation: https://docs.roboflow.com/
Roboflow Forum: https://discuss.roboflow.com/

Remember:
Roboflow is just one platform for making and finding datasets. In the next
section we cover another platform where you can find even more datasets:
https://www.kaggle.com/

6.4 Kaggle
From the official Kaggle website, this is an overview of what Kaggle is. Kaggle is
a platform that allows users to find datasets they want to use for building AI and Machine
Learning models, publish datasets, work with other data scientists and machine learning
engineers, and enter competitions to solve data science challenges. It is therefore the right
place for us to find a dataset, because there are many types of datasets we can get and use on
Kaggle.

6.4.1 Getting Started on Kaggle


On the left bar of Kaggle's main page there are many menu
entries. We can open these entries and use their functions.

This is a guide to what we can find, search for, and use on Kaggle:

1. Create: make a new notebook, dataset, or competition.

2. Home: see your activity record, such as the datasets, notebooks,
competitions, discussions, and courses you have taken or made. The latest
news from Kaggle also appears here.
3. Competitions: explore various interesting coding competitions to help us
develop our coding skills in this section
4. Datasets: download specific datasets to create accurate analysis.
5. Models: discover hundreds of trained models that are ready to test.
6. Code: run machine learning code with Kaggle notebooks.
7. Discussions: share your thoughts and questions through this Q&A forum.
8. Learn: find free courses and guides to help you gain a deeper understanding of
coding.
9. Your work: your personal profile.
10. View active events: you can also create a notebook or dataset through this
section.

6.4.2 Find a Dataset on Kaggle


On the Kaggle website, you can find datasets created by other people and
teams, which is an effective way to train AI and machine learning models.
This is the step-by-step guide to finding and using other people's datasets:

1. Go to the official Kaggle website: https://www.kaggle.com/


2. For the first time you go to Kaggle, you have to create an account, you can
sign up using Google or any other options.
3. After that you can go to the main page of Kaggle:

4. On the main page, go to the “Datasets” section. There, you can search any
datasets that have been provided by others in the search bar.
5. As an example, we use this dataset, from:
https://www.kaggle.com/datasets/aditya276/face-mask-dataset-yolo-format

6. Remember to download a dataset in YOLO format; just click the
download button to download the dataset.

7. After the download is complete, the file will be in your folder and you can go
on to the next section, which covers how to use that dataset on Google Colab.

6.5 YOLOv5
6.5.1 What is YOLOv5?
YOLOv5 is one of the world's most popular vision AI models, representing
Ultralytics' open-source research into future vision AI methods and incorporating
lessons learned and best practices evolved over thousands of hours of research and
development. YOLOv5 is a family of compound-scaled object detection models trained
on the COCO dataset, and it includes simple functionality for Test Time Augmentation
(TTA), model ensembling, hyperparameter evolution, and export to ONNX, CoreML,
and TFLite. To put it simply, we can think of YOLOv5 as a library and helper for
training on our dataset, so that the machine understands what we want to detect
using machine learning.

6.5.2 How to install YOLOv5

1. Go to the YOLOv5 GitHub: https://github.com/ultralytics/yolov5

2. There you can find a quick start and a guide for YOLOv5; read it carefully, so
that you understand well what YOLOv5 is.

3. First, download the ZIP of the repository and store it in your own folder
4. After that, install all the packages you need to use YOLOv5 according to
the documentation below:

5. Install all packages in requirements.txt, using this command:

To install the packages: pip install -r requirements.txt

6. Once you have set up all of the requirements needed to use YOLOv5, you
are ready to go on to the next section, which uses YOLOv5 in the
Google Colab environment.
7. Here are further guides for learning more about YOLOv5, and a simple manual on
how to use it in other ways:
PyTorch: https://pytorch.org/hub/ultralytics_yolov5/
Ultralytics: https://ultralytics.com/yolov5

One thing to remember: this is just one of many ways to train and use
machine learning. You can also explore YOLOv5 or other platforms and libraries
by yourself to learn more about machine learning and AI.

6.6 Google Colaboratory


6.6.1 What is Google Colab?
Google Colaboratory is a web IDE for Python that was released by Google in
2017. It is a free Jupyter notebook environment that runs entirely in the cloud. Colab
is a brilliant tool for data scientists to execute machine learning and deep learning
projects with cloud storage capabilities. Colab also gives its users free access to
high-end compute resources such as GPUs and TPUs, which are important for training
models faster.

6.6.2 How to use YOLOv5 on Google Colab as Environment


Important! Start this section only if you have already downloaded a
simple dataset from Roboflow or Kaggle; either platform is fine. For this tutorial we
are using the dataset below, and if you want to follow along more effectively you
can use it too: just click the link and download the dataset.
Source of the dataset:
https://universe.roboflow.com/inteligncia-computacional-ufpa/masked-people

As mentioned before about Google Colab, we will now use Google
Colaboratory as the environment for YOLOv5. This is the step-by-step guide
to using Google Colab:

1. First, go to the YOLOv5 GitHub (https://github.com/ultralytics/yolov5),
scroll down to this section, and click the Google Colab logo.
To make this easier, you can also go straight to the YOLOv5 environment on
Google Colab by clicking this link:
https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb

2. After opening the Google Colab notebook for YOLOv5, you will be on this page

3. Connect to Google Colab first by pressing connect at
the top of the page
4. Then set up the notebook: go to the “Edit” menu, then
“Notebook settings”, select “GPU” as the hardware accelerator,
and click save.

5. After that, go to the setup section of the Google Colab (YOLOv5) notebook
and run the code to clone the GitHub repository.
(Important: you must have already installed all the packages needed for
YOLOv5, as described in the YOLOv5 section).
Then run the setup code and clone the repository.

6. When that has completed, open the files menu in the left panel; the
YOLO master folder will appear in our Google Colab environment like this:

7. Then, after we have successfully cloned the YOLOv5 GitHub repository, we can
upload the dataset we already have by dragging and dropping it into the files
section (just upload the ZIP folder; we can extract it afterwards. This may
take quite a long time)

8. After the upload is complete, we will find our folder in Google Colab like
this:
9. Then scroll down in the Google Colab environment to the “Train”
section, right here:

10. There, we can extract our dataset ZIP folder using this command:

General form: !unzip -q ../”ZIP FOLDER NAME.zip” -d ../

Command used here: !unzip -q ../MaskDataSet.zip -d ../

11. After extracting all the files, refresh the files section and you will find
that the ZIP folder has been extracted.
12. Still in the “Train” section, change the provided code as in the
image below:

We change the number of epochs to 100 and replace the file name
“coco128.yaml” with “/content/data.yaml”, which is the path of our
“data.yaml” file. You can copy the path of the “data.yaml” file from the files
section; if you have already extracted the ZIP, it should be there.

Notes:
An epoch is one complete pass through all of the training data: it is defined as
the total number of iterations over all the training data in one cycle of training
the machine learning model.
13. Once your code matches the image above, run it

After that, just wait until the code finishes running (it will take at least a few
minutes).

14. When the code has finished running, go to the files section
and open the “yolov5” folder as in the image below:
../yolov5/runs/train/exp/weights/best.pt (download the “best.pt” file)

In other words:
Open yolov5 → open the runs folder → open the train folder → open the exp
folder → open the weights folder → download the “best.pt” file

15. “best.pt” is our machine learning model file that has been learned from our
dataset and we will use that file to make a program that detects something we
already put to the machine to learn, which is to detect people wearing a mask.

6.7 Jupyter Notebook


Jupyter Notebook is a web application for creating and sharing computational
documents. Jupyter notebook has two components: a front-end web page and a back-end
kernel. The front page allows data scientists to input programming code in text boxes. Then,
the back-end will run the code and return the results. One of the benefits of using Jupyter
Notebook is we can run the program box by box easier to convert to other formats such as
HTML and PDF. This is a step by step instruction that you can follow to install and use
Jupyter Notebook on Visual Studio Code:

1. Open Visual Studio Code and go to the extensions panel on the left bar. Search
“Jupyter”
2. Click the “install” button to install the extension.

3. Now we are ready to launch the Jupyter Notebook. Go to the command palette by
pressing Ctrl + Shift + P on a windows or Cmd + Shift + P on a mac. After that type
“jupyter notebook”. Choose “Create: New Jupyter Notebook”.

However, you also can just make a new file on VSCode, but give the file name
extension as “.ipynb” This extension file will automatically create a Jupyter Notebook
file, like this:

4. Select kernel on the top right. Make sure you set it to python
5. Everything is already set up. If you want to add a new code box, simply click on the
“+ Code” button on the left. Besides that, by using Jupyter Notebook, you also can
add a text box, by clicking on the “+ Markdown.”

6. One of the special features of the Jupyter Notebook is you can execute your code box
by box. To execute a specific box, click the run button on the left side of the box. You
can also run all programs at once by pressing the “Run All” button.

6.8 DJI Tello Mask Detection Program


Wow, you have learned many things so far and in this section you will use everything
that you have already learned about machine learning in making a detection program.
However, you must have already downloaded the “best.pt” file as what we mentioned before
that you are ready to go. This is a step by step tutorial to making a DJI Tello Mask Detection
Program using our (best.pt) file and YOLOv5:

1. Open your Visual Studio Code and make a new file, you can name it anything,
however use a Jupyter Notebook extension as what you have learned before like this:
2. Remember that the Jupyter Notebook file extension is “.ipynb” and you also should
put the “best.pt” file in the same folder as our Jupyter Notebook file.

3. As we know, we can divide our code in our Notebook, so we are making 2 parts of
code, the first part, we can run by using WIFI and the second part, we run it by
connecting to the DJI Tello drone first.

4. The first part of the code (Connect to WIFI):

# import libraries

import cv2
import torch
import numpy as np
from djitellopy import Tello
import time

Not just libraries, we also call the YOLOv5 and our best.pt model, so that we can use
it as our model to learn and detect people wearing a mask.

# external packages

path = "C:/Users/User/Documents/All About Code/Python/UPH FK Ilmu


Komputer/YOLOv5/best.pt"
model = torch.hub.load("ultralytics/yolov5", "custom", path,
force_reload=True)
The path variable: copy your best.pt path in your folder and paste it in the path
variable like the image above.
The model variable: you can just write the code like that, just copy it.

Remember: the “path” variable is according to where you store your best.pt file in
your own folder, so don’t copy it, but change it

So this is the first part of our code (Connect to WIFI):


You can copy the codes above or just write this by your own

Notes:
We run the import libraries first, then the external “packages” we should use which is
the YOLOv5 and our best.pt file

5. The second part of the code (Connect to DJI Tello Drone):

# method to detect mask

def mask_detect(frame):

mask_count = 0
results = model(frame)
frame = np.squeeze(results.render())
labels, cord = results.xyxyn[0][:, -1], results.xyxyn[0][:,
:-1]
n = len(labels)
x_shape, y_shape = frame.shape[1], frame.shape[0]
for i in range(n):
row = cord[i]
if row[4] >= 0.30:
x1, y1, x2, y2 = int(row[0] * x_shape), int(row[1] *
y_shape), int(row[2] * x_shape), int(row[3] * y_shape)
mask_count += 1

return mask_count, frame

This is the first part of the second part of the code, which is making the method to
detect a mask. You can just copy it to your Jupyter Notebook in VSCode.

# DJI Tello code

tello = Tello()
tello.connect()

print(tello.get_battery())
tello.streamon()

# shows frame
while True:

frame = tello.get_frame_read().frame

mask_count, frame = mask_detect(frame)


font = cv2.FONT_HERSHEY_COMPLEX_SMALL
# cv2.putText(frame,"Mask Count: " + str(mask_count), (10,
50), font, 1, (255, 0, 0), 3)
cv2.imshow("Mask Detection", frame)
time.sleep(0.05)
cv2.waitKey(1)

This is the second part of the second part of the code, which is the DJI Tello code and
shows the frame on our screen

So, this is the second part of the code, which is we must connect to the DJI Tello
Drone first:

You can copy the codes above or just write this by your own
6. Finally, this is the complete code, however you should run it part by part, the first
part run it by using WIFI and second part run it by connecting to the DJI Tello Drone:

# FIRST PART (CONNECT TO WIFI)

# import libraries

import cv2
import torch
import numpy as np
from djitellopy import Tello
import time

# external packages

path = "C:/Users/User/Documents/All About Code/Python/UPH FK Ilmu


Komputer/YOLOv5/best.pt"
model = torch.hub.load("ultralytics/yolov5", "custom", path,
force_reload=True)

# method to detect mask

def mask_detect(frame):

mask_count = 0
results = model(frame)
frame = np.squeeze(results.render())
labels, cord = results.xyxyn[0][:, -1], results.xyxyn[0][:,
:-1]
n = len(labels)
x_shape, y_shape = frame.shape[1], frame.shape[0]
for i in range(n):
row = cord[i]
if row[4] >= 0.30:
x1, y1, x2, y2 = int(row[0] * x_shape), int(row[1] *
y_shape), int(row[2] * x_shape), int(row[3] * y_shape)
mask_count += 1

return mask_count, frame


'''SECOND PART (CONNECT TO DJI TELLO DRONE)'''

# DJI Tello code

tello = Tello()
tello.connect()

print(tello.get_battery())
tello.streamon()

# shows frame

while True:

frame = tello.get_frame_read().frame
frame = cv2.resize(frame, (480, 360))
mask_count, frame = mask_detect(frame)
font = cv2.FONT_HERSHEY_COMPLEX_SMALL
cv2.putText(frame,"Mask Count: " + str(mask_count), (10, 50),
font, 1, (255, 0, 0), 3)
cv2.imshow("Mask Detection", frame)
time.sleep(0.05)
cv2.waitKey(1)

Notes:
Run it part by part by using Jupyter Notebook

7. Run the code and Documentation:

Running the first part of the code while connected to the WIFI:
Running the second part of the code while connected to the DJI Tello Drone:
After you run all the parts of the code correctly, then you will find the frame screen
appear in your windows and just open it.

This is what the frame shows:


Congratulations, you have made a simple machine learning to detect an object!

8. To close the frame window, you should stop the last code from running, just click the
part of the last code, which is the DJI Tello Drone and shows the frame.

After that, close the frame windows and close the program like this:

Click “Close the program.”

6.9 Object Detection Using YOLOv5


Besides making a custom dataset using Roboflow or Kaggle and training it on Google
Colab, YOLOv5 has their own dataset by using COCO dataset. In YOLOv5 we can use many
type of dataset according to what we want, like this image below:

This graphic shows the accuracy and efficiency using the YOLOv5 Machine Learning Model
and this is the notes about the details:
➢ COCO AP val denotes mAP@0.5:0.95 metric measured on the 5000-image COCO
val2017 dataset over various inference sizes from 256 to 1536.
➢ GPU Speed measures average inference time per image on COCO val2017 dataset
using a AWS p3.2xlarge V100 instance at batch-size 32.
➢ EfficientDet data from google/automl at batch size 8.

This table shows more detailed information for each YOLOv5 model that we can use.
So, if we don’t want to make a custom dataset and just want to try simple machine
learning. We can use the YOLOv5 model and choose what model we want to use. However,
if we use the YOLOv5 model, we can’t detect a thing that we want, our camera will detect
any object that the model has been training for. For example, the YOLOv5 model will detect
bottle, person, tie, mouse, etc and all of the things that the model has learned will be detected
by our DJI Tello drone’s camera. For more details you can try this code by your own, which
is detect objects using YOLOv5 model and for this example we use “yolov5x” model:

# CONNECT TO WIFI FIRST!

# import libraries

import cv2
import torch
import numpy as np
from djitellopy import Tello
import time

# YOLOv5 Packages

model = torch.hub.load("ultralytics/yolov5","yolov5x")

# method to detect mask

def object_detect(frame):

object_count = 0
results = model(frame)
frame = np.squeeze(results.render())
labels, cord = results.xyxyn[0][:, -1], results.xyxyn[0][:, :-1]
n = len(labels)
x_shape, y_shape = frame.shape[1], frame.shape[0]
for i in range(n):
row = cord[i]
if row[4] >= 0.30:
x1, y1, x2, y2 = int(row[0] * x_shape), int(row[1] *
y_shape), int(row[2] * x_shape), int(row[3] * y_shape)
object_count += 1
return object_count, frame

# CONNECT TO THE DJI TELLO DRONE’S WIFI!

# DJI Tello code

tello = Tello()
tello.connect()

print(tello.get_battery())
tello.streamon()

# shows frame

while True:

frame = tello.get_frame_read().frame

object_count, frame = object_detect(frame)


font = cv2.FONT_HERSHEY_COMPLEX_SMALL
cv2.putText(frame,"Object Count: " + str(object_count), (10, 50),
font, 1, (255, 0, 0), 3)
cv2.imshow("Object Detection", frame)
time.sleep(0.05)
cv2.waitKey(1)

Actually this code is very similar to the previous code, which is mask detection. But, in this
case we didn’t use a custom dataset but we used the YOLOv5 model provided from the
COCO dataset. So, the major difference between this code with the mask detection code are:
➢ We didn’t make and use custom data, but we use the YOLOv5 model.
➢ We cannot detect things that we want, but our camera will detect anything that the
YOLOv5 has been training for.
➢ In the code, we just change the external packages section, which is if we use a custom
dataset, we need to put our path of the “best.pt” file. However, if we use the YOLOv5
model, we don't need a custom path, because we just connect it to the YOLOv5
model.
This is what the code looks like if we run it (Object Detection Using YOLOv5):

So this is the output difference between using custom dataset and COCO dataset. However,
this is the end of the Machine Learning Chapter, but you can learn more in the next chapter,
so just read it more. Congratulations on your improvement skill on drone programming so
far!
CHAPTER VII

SIMULATION PROJECT

7.1 Simulation for Search and Rescue Project


Welcome to the final chapter of the Drone Programming Manual Module! In this
chapter, we have an exciting project to share with you: the Search and Rescue Simulation
Project. This project is based on real-life scenarios, where drones can be incredibly helpful in
finding and rescuing people in need. Because, we want to emphasise that drone programming
is not just a theoretical concept but can be applied in practical, real-life situations. It's
amazing how we can take the ideas we learn here and use them to make a positive impact in
the world.
So far, as you can see, there are endless possibilities with drone programming. So, we
encourage you to let your creativity soar and explore different ideas and projects with your
DJI Tello Drone. You can even come up with your own unique projects. Therefore, this
Search and Rescue Simulation Project is just one example among many. So, have fun, be
curious, and keep pushing the boundaries of what you can achieve with drone programming.

7.1.1 Scenario for the Simulation Project


The simulation is derived from a simplified and smaller-scale version of an
autonomous SAR (Search and Rescue) operation. To begin, a designated launch area
will be established as the starting point for the drone's mission. The drone will take off
from this location and commence its search. Its primary objective will be to locate a
ball, which symbolises a person in a real-life scenario.
Once the drone identifies the ball, it will proceed to approach it and land
approximately 1 metre behind the target. Following a brief period, the drone will take
off once more, resuming its search for another ball that requires rescue.
This simulation serves as a practical exercise to test the drone's autonomous
search and landing capabilities in a controlled environment. By repeating the process,
the drone can refine its performance and enhance its effectiveness in real-life SAR
missions.

7.2.1 DJI Tello Drone’s Program for Search and Rescue Simulation Project
Developing a program for the drone to autonomously search, identify, and
land on objects is a more intricate task compared to what we have accomplished so
far. It necessitates us to comprehend application of the knowledge and skills we have
acquired throughout this module. We must combine our code we have learnt, ranging
from fundamental functionalities such as takeoff, to more sophisticated processes like
object recognition and precise manoeuvring.
To know how to code for the simulation project, we should know what the
flying path and the simulation based on real life look like. So that, this is the scenario
picture of Search and Rescue Program:

In this Search and Rescue Project, we will use a ball that represents a human being in
real life. So, this is the step by step of our project will look like:
1. We will make a custom dataset about balls that we will use in the project
2. We will train a machine learning model using YOLOv5 with Google Colab
Environment to learn our ball custom dataset, so that the DJI Tello Drone will
detect the ball using the camera’s sensor.
3. We will code the drone to detect the ball that represents humans in this case
and will go forward the ball, then landing to represent that the drone fetch the
human and then take off again to Search and Rescue all the balls (human).
4. Finally after the drone has gone and picked up all the balls (human), the drone
will go back to the first place where the drone has taken off to represent that it
is the safe place.

So, this is what we will do and code about the Search and Rescue Simulation Project
and let’s make it:

1. First step, make a custom dataset about balls. In this case, we will use Roboflow as
our platform to make the custom dataset.

First, we take a 2 - 3 minutes video to record the ball that we will use in the
simulation project and then upload the video to the Roboflow object detection project
with 5 - 6 frames per second.

Choose the frame rate from the video


Then, click “save and continue.”

After we have enough images to make a custom dataset, then we annotate the ball in
all images that we use.

After we’ve done annotating all the images, then assigning all the images to the
dataset and then still in the annotate section, click “Generate new version.”
Clik, “Generate New Version.” and you will be in this page shown below:

In the augmentation section, add rotation and blur to the augmentation step and then
just generate it.
Generate with the maximum version size for free (3x from total images), then click
Generate.

After that, export the dataset with YOLOv5 PyTorch format and download the ZIP to
the computer.

Congratulations, on the first step to make a custom dataset with Roboflow!

2. Train a machine learning model using YOLOv5 with Google Colab Environment

Actually, this step is the same as what we explain and give as an example in the
previous chapter, about Machine Learning. So that, you can go back and learn how to
use YOLOv5 to train a machine learning model with Google Colab as Environment
on Chapter 6 (6.6.2). However, surely the ZIP file and the custom dataset is different
from the example, because in this case we will use a custom dataset that we have been
exporting from Roboflow that will detect a ball. But, for the step by step, you can
follow from the “6.6.2 lesson”

3. Finally, this is the final step of our Simulation Project which is to code the DJI Tello
Drone to follow the Search and Rescue Program as what we wanted before. However,
in this part we will not make the code the same as what we wanted before, because we
will give you a challenge to make the code on your own. But, we will give a basic
example that approaches the Search and Rescue Program. The code:


BIBLIOGRAPHY

Banoula, M. (2023, February 16). Machine Learning Steps: A Complete Guide!.

https://www.simplilearn.com/tutorials/machine-learning-tutorial/machine-learning-ste

ps

Damiafuentes. (2023, April 7). damiafuentes/DJITelloPy: DJI Tello drone python interface

using the official Tello SDK. Feel free to contribute! GitHub.

https://github.com/damiafuentes/DJITelloPy

de Silva, C. W. (2003). Sensors for Control. Encyclopedia of Physical Science and

Technology, 609–650. https://doi.org/10.1016/b0-12-227410-5/00139-3

DJITelloPy API Reference. (2023). Readthedocs.io. https://djitellopy.readthedocs.io/en/latest/

Gupta, S. (2022). Is ai hard to learn? A guide to getting started in 2023 - springboard.

https://www.springboard.com/blog/data-science/is-ai-hard-to-learn/

IBM. (2022). What is machine learning? https://www.ibm.com/topics/machine-learning

Jocher, G. (2020). YOLOv5 by Ultralytics (Version 7.0) [Computer software].

https://doi.org/10.5281/zenodo.3908559

Ryze R. (2018). SDK 2.0 User Guide 2.

https://dl-cdn.ryzerobotics.com/downloads/Tello/Tello%20SDK%202.0%20User%20

Guide.pdf

Singh, P. (2022.). How to use Google Colab for Machine Learning Projects.

https://www.shiksha.com/online-courses/articles/how-to-use-google-colab-for-machin

e-learning-projects/
Coursera. (2023). What Is Python Used For? A Beginner’s Guide.

https://www.coursera.org/articles/what-is-python-used-for-a-beginners-guide-to-using

-python
ATTACHMENT

Besides the manual module that we had provided, in this attachment we also provide the

GitHub repository link, so you can easily clone the module and every code in it, and use it on

your own.

Github Repository Link:

https://github.com/StyNW7/UPH_DJITello.git

Anda mungkin juga menyukai