
SLAM BASED CARGO DETECTING FORKLIFT

BY

ANUM IMTIAZ 2013-EE-003


ARUBA SHAHID 2013-EE-012
SYED ASHAR ALAM 2013-EE-015
FATIMA SHOAIB 2013-EE-023
SAQIB AKHTER 2013-EE-038

Report submitted in partial fulfillment of the requirements

for the degree

of Bachelor of Science

in Electronic Engineering

DEPARTMENT OF ELECTRONIC ENGINEERING

SIR SYED UNIVERSITY OF ENGINEERING AND TECHNOLOGY, KARACHI

JANUARY 2017
DECLARATION

I hereby declare that this project report entitled SLAM BASED CARGO DETECTING FORKLIFT is an original work carried out by Anum Imtiaz, Aruba Shahid, Syed Ashar Alam, Fatima Shoaib and Saqib Akhter in partial fulfillment of the requirements for the award of the degree of Bachelor of Science in Electronic Engineering of the Electronic Engineering Department, Sir Syed University of Engineering and Technology, Karachi, Pakistan, during the years 2013 to 2017. The project report has been approved as it satisfies the academic requirements in respect of the project work prescribed for the degree of Bachelor of Science in Electronic Engineering.

I also declare that it has not been previously or concurrently submitted for any other degree at this or any other institution.

Signature: ____________________________

Internal Advisor: _______________________

Date: _____________________________

ACKNOWLEDGEMENT

First of all, we would like to thank ALMIGHTY ALLAH, without Whom nothing would have been possible to accomplish. His will makes everything possible, and by it we were able to complete this Final Year Project entitled SLAM BASED CARGO DETECTING FORKLIFT.

On this occasion, deepest thanks and appreciation go to our family members for their prayers, cooperation, encouragement, care and constructive suggestions from the beginning till the end. Without their support we would not have been able to complete this project on time. We dedicate this work to them, who unconditionally provided us with the resources.

We would like to express our heartfelt gratitude to our internal advisor, Sir Noman Mehmood, for giving us strength, knowledge and sincere advice throughout the project. Due to his invaluable guidance and generous support, this project turned out to be a successful one.

We would like to thank all our teachers for sharing their knowledge and skills with us throughout these four years. Furthermore, thanks to the lab staff, our batch mates, friends and everyone who helped us out and motivated us in difficult situations.

Finally, we would like to thank Sir Syed University of Engineering and Technology for giving us a platform to show and polish our skills, converting our knowledge into a valuable product.

ANUM IMTIAZ 2013-EE-003


ARUBA SHAHID 2013-EE-012
SYED ASHAR ALAM 2013-EE-015
FATIMA SHOAIB 2013-EE-023
SAQIB AKHTER 2013-EE-038

January 2017

TABLE OF CONTENTS

DECLARATION
ACKNOWLEDGEMENT
TABLE OF CONTENTS
LIST OF FIGURES
ABSTRACT

CHAPTER ONE: INTRODUCTION
1.0 Introduction
1.1 Robot
1.2 Cartesian Coordinate Robot
1.3 Course of Action
1.4 Background and Overview
1.5 Application of Project
1.6 Conclusion

CHAPTER TWO: THEORETICAL BACKGROUND & REVIEW OF LITERATURE
2.0 Introduction
2.1 SLAM
2.1.1 History of SLAM
2.1.2 Implementation of SLAM
2.2 Forklift
2.2.1 History of Forklift

CHAPTER THREE: DETAILS OF THE DESIGN
3.0 Introduction
3.1 Circuit Diagram
3.2 Mechanical Structure
3.2.1 Main Assembly
3.2.2 Dimensions

CHAPTER FOUR: SYSTEM HARDWARE
4.0 Design Applications
4.1 Motors
4.1.1 Permanent Magnet DC Motor
4.1.2 Four-Quadrant Operation of Machine
4.1.3 Separately Excited DC Motor
4.1.4 Hydraulic Motor
4.2 Hydraulic Gear Motor
4.2.1 Working
4.3 Motor Driver
4.3.1 H-Bridges
4.4 Raspberry Pi 3
4.5 Battery
4.6 Sensors
4.6.1 Ultrasonic Sensor
4.6.2 Electric Parameters
4.7 Camera

CHAPTER FIVE: SYSTEM SOFTWARE
5.0 Introduction
5.1 Python
5.1.1 What's New in Python
5.1.2 Features
5.2 Why Python
5.2.1 Readability
5.2.2 Libraries
5.2.3 Community
5.3 Forward Motion Planning and Map Generation
5.3.1 Monocular SLAM
5.3.2 Grid SLAM
5.4 Pygame Window
5.5 OpenCV
5.5.1 Introduction
5.5.2 Key Features
5.5.3 Why OpenCV?
5.5.4 Image Processing
5.6 K-Nearest Neighbors Algorithm
5.6.1 Classification with K-Nearest Neighbors
5.6.2 K-Nearest Neighbors Algorithm Overview
5.7 Optical Character Recognition
5.7.1 Introduction
5.7.2 Steps with Images
5.8 Flowchart Representation of OCR
5.9 Object Detection
5.10 Reverse Motion Planning
5.11 Wi-Fi Communication
5.11.1 Introduction
5.11.2 How Wi-Fi Works
5.11.3 IEEE 802.11 Standard
5.11.4 Interference
5.11.5 Ad Hoc Communication

CHAPTER SIX: RESULT AND DISCUSSIONS
6.0 Technical Problems Faced during the Project
6.1 Result

CHAPTER SEVEN: CONCLUSION AND FUTURE ENHANCEMENTS
7.0 Conclusion

REFERENCES
GLOSSARY
APPENDIX A: Coding
APPENDIX B: Cost Analysis; Gantt Chart
DATASHEETS
TURNITIN REPORT
LIST OF FIGURES

1.1 Cartesian Coordinate Robot
3.1 Block Diagram of the Proposed Model
3.2 Circuit Diagram
3.3 Mechanical Structure from Front
3.4 Mechanical Structure from Back
3.5 Mechanical Structure from Side
3.6 Main Assembly of Hardware
3.7 Dimensions of Mechanical Structure
4.1 Permanent Magnet DC Motor
4.2 Four-Quadrant Operation of Drives
4.3 Separately Excited DC Motor
4.4 Working of Hydraulics
4.5 Basic Gear Alignment
4.6 H-Bridge Power MOSFET
4.7 Relay-Based H-Bridge
4.8 Raspberry Pi 3
4.9 LIDO Battery
4.10 Relationship between Actual and Calculated Distances
4.11 Ultrasonic Sensor (HC-SR04)
4.12 2.1-Megapixel Camera
5.1 Pygame Window Map Generation
5.9 Complete Algorithm for Map Generation
5.10 Clustering Data Points
5.11 Two Groups of Data Points
5.12 Two Closest Points to 'x' and 'y' when k=2
5.13 Two Closest Points to 'z' when k=2
5.14 Three Closest Points to 'z' when k=3
5.15 KNN Algorithm Overview
5.16a Original Image
5.16b Grayscale Image
5.17 Thresholded Image
5.18 Image Contours
5.19a Possible Characters in Scene
5.19b List of Matching Characters in Scene
5.19c List of Possible Name Plates on the Cargo
5.19d Cropped Image from the Original Image
5.19e Threshold Image
5.19f Recognized Characters
5.20 Possible Plate on the Cargo
5.21 Flowchart Representation of OCR
5.22 Flow Chart of Image Comparison
ABSTRACT
Our motivation is to create an autonomous path-planning forklift robot for industrial use by using different navigation techniques and technologies. It has a wide range of applications, such as construction, manufacturing, waste management and space exploration. The key to a successful autonomous forklift robot is path planning using different algorithmic techniques. Path planning in robotics is defined as navigation that is collision free and optimal for the autonomous vehicle maneuvering from a source to its destination.

The project is a synergy and implementation of four techniques: SLAM, image processing, OCR and a Wi-Fi communication system. It is implemented using ultrasonic sensors, a webcam, a hydraulic lift, window motors, a Raspberry Pi 3 B and a battery.

Simultaneous Localization and Mapping (SLAM) is a process by which a robot can build a map of an unknown environment and at the same time determine its location within this map. This is useful where normal positioning systems are unavailable or unreliable.

Thus, the first goal of the project is to investigate how well SLAM performs using ultrasonic sensors; the second is to pick up an object from one place and put it at its desired location autonomously through image processing and OCR (optical character recognition). The use of only these types of measurements results in a solution where an estimate of the position and heading of the robot is obtained. A substantial deployment would represent a major step forward in the development of autonomous vehicle systems. The implementation and deployment of a large-scale SLAM system, capable of vehicle localization and map building over large areas, and of moving an object from one location to another, not only helps as a solution to the map management problem but also performs the task of moving objects to desired positions autonomously, replacing manual labor.


CHAPTER ONE

INTRODUCTION

1.0 INTRODUCTION

For the past century there has been a lot of buzz about artificially smart machines, i.e. machines that could learn: navigate unknown paths, generate vocal responses to different questions, and handle cognitive and repetitive tasks, hence eliminating the need for human input. The last century gave us the industrial revolution; it is safe to say today is the dawn of the smart revolution.

1.1 ROBOT

A robot is an autonomous system which exists in the physical world, can sense its environment, and can act on it to achieve some goals. An industrial robot serves as a general-purpose unskilled or semiskilled laborer. There are five types of industrial robots; ours is a Cartesian coordinate robot.

1.2 CARTESIAN COORDINATE ROBOT

A Cartesian coordinate robot has three linear axes of control (x, y, z). Cartesian coordinate robots

with the horizontal members supported at both ends are sometimes called Gantry robots and can

be quite large in size.

Fig 1.1 Cartesian coordinate robot


1.3 COURSE OF ACTION

We have to create an autonomous land vehicle which can generate a map of its path by using ultrasonic sensors. A camera is used to detect the desired object by using image processing techniques, i.e. an image similarity algorithm and an optical character recognition algorithm to read alphanumeric characters. A hydraulic lift is then used to lift the object, which is placed back at its initial position by using a reverse motion technique. The path of movement and 2D coordinates are wirelessly transmitted using Wi-Fi communication, and the 2D map is shown on the pygame window.

1.4 BACKGROUND AND OVERVIEW

Globally, intense competition is underway to implement autonomous vehicles by eliminating the human element in cars. Pakistan, in the coming years, will experience a surge in dry port facilities due to the implementation of the China-Pakistan Economic Corridor. It is an overriding concern that, if these facilities are not modernized, there may be financial repercussions.

In this age the majority of the population is fixated on acquiring data, and the course of implementation involves visual access to that data, i.e. a map showing the path of movement. The question arises of how to create an autonomous ground vehicle that maps an area, identifies objects, and returns to its initial position. To make our task more challenging, the return path is implemented using our own algorithm.


1.5 APPLICATION OF PROJECT

SLAM is central to a range of indoor, outdoor, in-air and underwater applications for both manned and autonomous vehicles. This project is not only beneficial for large industries moving heavy accoutrements, rigging, baggage, equipment, etc., but its applications also extend to smaller scales such as the home.

1.6 CONCLUSION

The end results of our project are:

An autonomous SLAM (Simultaneous Localization and Map building) vehicle: it is possible for an autonomous vehicle to start at an unknown location in an unknown environment and, using relative observations only, incrementally build a perfect map of the environment.

To detect an object through a webcam.

To lift that object with the hydraulic lifter and then place it at the desired location.


CHAPTER TWO

THEORETICAL BACKGROUND & REVIEW OF LITERATURE

2.0 INTRODUCTION

For every project there are always some inspirations and requirements which motivate one to make the project fulfill what was envisioned. For us, it is the industrial revolution. For the past century there has been a lot of buzz about artificially smart machines, i.e. machines that could learn: navigate unknown paths, generate vocal responses to different questions, and handle cognitive and repetitive tasks, hence eliminating the need for human input.

Our project is a SLAM based cargo detecting forklift, which means we have to perform three major tasks to achieve our goal:

SLAM (Simultaneous Localization and Mapping): generating a map of an unknown environment.

Cargo detection: detecting the desired object by performing image processing and OCR (optical character recognition).

Forklift: a hydraulic lifter is used to lift the object and place it back at its position.

2.1 SLAM (Simultaneous Localization and Mapping)

SLAM is used for robot mapping. It is the problem of keeping track of a robot's location within a map while creating or updating that map of an unknown environment. Several algorithms exist to solve this problem, including the particle filter and the extended Kalman filter.


2.1.1 HISTORY OF SLAM

In 1986, research on the representation of spatial uncertainty by R.C. Smith and P. Cheeseman became an inspirational work in SLAM. Other revolutionary work was conducted by the research group of Hugh F. Durrant-Whyte in the early 1990s. This showed that infinite solutions are available for SLAM, which motivated the search for different algorithms.

In 1999 the Austrian inventor Erich Bieramperl described autonomous robot systems; see "Method to Generate Self-Organizing Processes in Autonomous Mechanisms and Organisms" (US Patent No. 6172941). Every map generation of the environment is based on the comparison of previously acquired and stored elapse-time data with actual elapse-time data.

SLAM got worldwide attention when the DARPA Grand Challenge was won by the self-driving car Stanley, which included a SLAM system. [1]

2.1.2 IMPLEMENTATION OF SLAM

Vacuum cleaners (e.g. the Neato XV11)

Self-driving cars by Google

Various SLAM algorithms implemented in the libraries of ROS (Robot Operating System)

2.2 FORKLIFT

A forklift is an industrial truck used to lift objects and move them over short distances. In the early 20th century the forklift was developed by various companies, including:

The transmission manufacturing company Clark

The hoist company Yale & Towne Manufacturing


2.2.1 HISTORY OF FORKLIFT

The forklift has been serving industry for about 90 years. The forklift was introduced by Clark in 1917, and these machines were offered for sale in 1918. Initially these forklifts looked like tractors with a platform attached. Over the early 1920s the design of the forklift evolved to include a vertical lifting mast such as we know today. In the 1930s the Hyster Company moved from manufacturing logging equipment into the world of forklift trucks.

In World War II the need for forklifts increased, which brought a number of manufacturers into the business. The US Armed Forces required a mobile method of moving materials to the front lines, for which the forklift was well suited.

After the war, the need for more efficient forklifts grew, which led the Raymond Corporation to introduce the Narrow Aisle Reach truck in the early 1950s. Other manufacturers, such as Lewis-Shepard and Crown Equipment, introduced lines of battery-powered pallet trucks.

Forklifts have continued to develop, increasing their use in moving the world's cargo. Forklifts are no longer just bulky industrial equipment: today's forklifts are equipped with electronics that let the operator interface with cargo management systems and new RFID technology to increase productivity. [2]


CHAPTER THREE

DETAILS OF THE DESIGN

3.0 INTRODUCTION

The system model, an autonomous SLAM based forklift, is divided into five main parts. The first part is forward motion planning and map generation to move the forklift to its destination. The second part is image acquisition from a fixed camera and image processing, in which object detection is performed by comparison with a reference image to identify the desired object. The third part is optical character recognition to read the data of the desired object. The fourth part is reverse motion planning to return the forklift to its original position after lifting the object. Finally, the fifth part is to create communication between the user interface and the controlling device on the forklift, so that the map and other information can be seen on the user interface.

Fig 3.1 Block Diagram of the proposed model


3.1 CIRCUIT DIAGRAM

Fig 3.2 Circuit Diagram

3.2 MECHANICAL STRUCTURE

Fig 3.3 mechanical structure from front


Fig 3.4 Mechanical structure from back

Fig 3.5 Mechanical structure from side


3.2.1 MAIN ASSEMBLY

Fig 3.6 Main assembly of hardware


3.2.2 DIMENSIONS

Fig 3.7 Dimensions of mechanical structure


CHAPTER FOUR

SYSTEM HARDWARE

4.0 DESIGN APPLICATIONS

We have designed the cargo lifting robot in such a way that the back of the body carries greater weight, so the machine can lift weight from the front through the fork without tipping over.

4.1 MOTORS

Most electric motors operate through the interaction between the current in the windings and the motor's magnetic field, which produces a torque that causes the motor to rotate.

Motors built to specific specifications are used in industry, for example in transporter systems, to automate the working of all operations. An electric motor converts electrical energy into mechanical energy.

General-purpose motors with standard dimensions and characteristics provide mechanical energy for industrial use; at the other extreme, a micro motor can be used in a wrist watch or wall clock.

Motors can be further classified by:

Construction

Power source

Applications

Outputs

Applications of motors in daily life include power appliances, hair dryers, fans, toys, CD-ROM drives, pumps, hard drives and household appliances.

DC power for motors is produced by sources such as rectifiers and batteries. A DC voltage is used to supply a DC drive, which drives a DC motor. A DC motor generally runs at lower speed as compared to an AC motor.

4.1.1 PERMANENT MAGNET DC MOTOR (PMDC):

In a permanent magnet DC motor, permanent magnets replace the field windings. It has brushes, an armature and a commutator, and operates at low power. The starting torque of a permanent magnet motor is limited to avoid demagnetizing the field poles.

Fig 4.1 PMDC

Advantages of the PMDC motor are as follows:

There is no field winding; only the armature winding is present.

As there is no field winding, no input power is consumed for excitation, which increases the efficiency of the motor.

The overall size of the motor is reduced due to the absence of the field winding.

It is cheaper and economical for fractional-kW applications.


Disadvantages of the PMDC motor:

The armature reaction of the DC motor is not compensated, so the magnetic field strength weakens due to the demagnetizing effect of the armature winding.

The speed cannot be controlled externally via the field, because the field in the air gap is fixed.

Applications of PMDC motors include automobile starters, toys, wipers, washers, hot blowers, air conditioners, computer disc drives and many more.

4.1.2 FOUR QUADRANT OPERATION OF MACHINE

The four quadrant operations of a DC motor are: forward motoring, forward braking, reverse motoring and reverse braking. These are the four modes in which a machine can operate.

In motoring mode, electrical energy is converted into mechanical energy and the machine acts as a motor. In braking mode, mechanical energy is converted into electrical energy, the machine acts as a generator, and it opposes the motion. The motor can run in both directions, forward and reverse (with motoring and braking in each).

Fig 4.2 Four quadrant operations of drives


In the I (first) quadrant the developed power is positive and the machine works as a motor supplying mechanical energy; this operation is called forward motoring. II (second) quadrant operation is known as braking: the direction of rotation is positive and the torque is negative, so the machine operates as a generator developing a negative torque which opposes the motion. The kinetic energy of the rotating parts becomes available as electrical energy, which may be supplied back to the mains; in dynamic braking this energy is dissipated in a resistance.

The III (third) quadrant operation is known as reverse motoring: the motor works in the reverse direction, both the speed and the torque have negative values, and the power is positive. In the IV (fourth) quadrant the torque is positive and the speed is negative; this quadrant corresponds to braking in the reverse motoring mode.

Torque equation for a DC motor:

T = K φ Ia

where K is a constant; the torque is varied by varying the flux φ and the armature current Ia.

4.1.3 SEPARATELY EXCITED DC MOTOR:

In a separately excited DC motor the field winding is independent of the armature winding. In this motor the armature current does not flow through the field winding, and the field is energized by a separate source, as shown in Fig 4.3.


Fig 4.3 Separately excited dc motor

The disadvantages of the separately excited DC motor are:

It needs an external DC power supply source such as a battery or rectifier.

If a battery is used, it should be kept in a charged condition.

4.1.4 HYDRAULIC MOTOR:

A hydraulic motor converts hydraulic energy into mechanical energy. It consists of a rotating shaft and uses hydraulic pressure and flow to generate torque and rotation.

Advantages of the hydraulic motor:

Constant force and high torque (T = displacement × psi / 24π).

Ease and accuracy of control.

Simpler and easier to maintain.

Disadvantages:

Slow motion.

Oil leakage problems.

Hydraulic fluids may catch fire in the event of leakage, especially in hot regions; glycol-ether based fluids are fire-resistant.

The DC hydraulic jack used has a maximum load rating of 1 ton.


Hydraulic motors have many applications; they are used in crane drives, winches, mixers, roll mills, etc.

There are different types of hydraulic motors:

Hydraulic gear motors

Hydraulic vane motors

Hydraulic piston motors

4.2 HYDRAULIC GEAR MOTORS:

A gear motor is an extension of a DC motor. It has a gearbox that increases torque and decreases speed. Hydraulic gear motors and piston motors are high-speed motors; the output speed of the shaft can be reduced by using gears. The operating pressure of a gear motor is usually between 100 and 150 bar.

Fig 4.4 Working of hydraulics

Fig 4.5 Basic gear alignment


4.2.1 WORKING:

A hydraulic gear motor delivers greater torque at lower speed. A series of gears reduces the speed, which creates more torque; to increase the torque output of the hydraulic motor we need gears. The gearbox contains an integrated series of gears attached to the main motor, with the shaft connected to a second reduction shaft. The ratio between the diameters of two meshed gears determines the gear ratio: ratio = d2/d1.

A greater diameter on the output side results in greater torque but lower rpm, and the longer the chain of reduction gears connected in series, the slower the output at the end.

An everyday example of gearing is the electric clock, which has hour, minute and second hands: the gears of the motor rotate at a certain speed, such as 1500 revolutions per minute, to spin the rotor.
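As a small illustration of these relations, the output of one reduction stage can be computed as below (the numbers are hypothetical, and the calculation assumes an ideal, lossless gear pair):

# Illustrative gear-reduction calculation (hypothetical numbers; ideal gears).
def reduction_stage(in_torque_nm, in_rpm, d1_mm, d2_mm):
    # ratio = d2/d1: torque scales up by the ratio, speed scales down.
    ratio = d2_mm / d1_mm
    return in_torque_nm * ratio, in_rpm / ratio

torque, rpm = reduction_stage(in_torque_nm=1.0, in_rpm=1500.0, d1_mm=20.0, d2_mm=60.0)
print(torque, rpm)  # 3.0 N*m at 500.0 rpm: more torque, less speed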

Advantages of gear motors are:

Low weight and size

Relatively high pressure

Wide range of speed

Wide temperature range

Simple and durable design

Wide viscosity range

4.3 MOTOR DRIVER

4.3.1 H-BRIDGES

An H-bridge is an electronic circuit that enables a voltage to be applied across a motor, or any other load, in either direction. H-bridges can be used to run motors; these circuits are often used in robotics and other applications to allow DC motors and stepper motors to run forwards and backwards. H-bridges are used not only for forward and backward motion but also for stopping the motor. When it comes to motor control and H-bridges, two types of power transistor take the main stage: the power BJT and the power MOSFET. The main difference between the two, as far as we are concerned, is the power loss; that is why we use power MOSFETs in our project, along with a relay-based H-bridge and the L298.

Fig 4.6 H-bridge Power MOSFET

Fig 4.7 Relay based H-bridge
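As an illustration of how such a driver is typically commanded from the Raspberry Pi, a minimal sketch using the RPi.GPIO library is shown below (the BCM pin numbers are hypothetical, and the two inputs are assumed to drive opposite legs of one bridge):

import RPi.GPIO as GPIO
import time

IN1, IN2, EN = 17, 27, 22          # hypothetical BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup([IN1, IN2, EN], GPIO.OUT)
pwm = GPIO.PWM(EN, 1000)           # 1 kHz PWM on the enable pin
pwm.start(70)                      # ~70% duty cycle sets the motor speed

def forward():
    GPIO.output(IN1, GPIO.HIGH)    # one leg high, the other low:
    GPIO.output(IN2, GPIO.LOW)     # current flows one way through the motor

def reverse():
    GPIO.output(IN1, GPIO.LOW)     # swap the legs to reverse the current
    GPIO.output(IN2, GPIO.HIGH)    # and hence the direction of rotation

def stop():
    GPIO.output(IN1, GPIO.LOW)     # both legs low: the motor coasts to a stop
    GPIO.output(IN2, GPIO.LOW)

forward(); time.sleep(2)
reverse(); time.sleep(2)
stop(); pwm.stop(); GPIO.cleanup()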

4.4 RASPBERRY PI 3

The Raspberry Pi 3 is the third generation Raspberry Pi. It replaced the Raspberry Pi 2 Model B

in February 2016. Compared to the Raspberry Pi 2 it has:

A 1.2GHz 64-bit quad-core ARMv8 CPU

802.11n Wireless LAN


Bluetooth 4.1

Bluetooth Low Energy (BLE)

Like the Pi 2, it also has:

1GB RAM

4 USB ports

40 GPIO pins

Full HDMI port

Ethernet port

Combined 3.5mm audio jack and composite video

Camera interface (CSI)

Display interface (DSI)

Micro SD card slot (now push-pull rather than push-push)

VideoCore IV 3D graphics core

The Raspberry Pi 3 has an identical form factor to the previous Pi 2 (and Pi 1 Model B+) and has

complete compatibility with Raspberry Pi 1 and 2.

Fig 4.8 Raspberry pi 3


4.5 BATTERY

A battery is a device that consists of one or more electrochemical cells. It transforms chemical energy into electricity, which is used to power electrical devices. In our project we used a LIDO dry battery rated 12 Ah / 12 V. The battery powers the hydraulic lift, the four motors and the Raspberry Pi 3.

Fig 4.9 LIDO battery

PRECAUTIONS FOR SAFE HANDLING OF THE BATTERY:

Store the battery under a roof for protection.

Always keep the battery in a dry, cool, ventilated area. Do not store near heat or open flames.

To avoid leaks, containers should be protected from damage.

Conductive material should not touch the battery terminals; a dangerous short circuit may cause battery failure and fire.


4.6 SENSORS

A sensor is a device used to detect physical data from the environment (analog data) and convert it into digital data readable by our controller, which makes decisions on the basis of the data provided by the sensors. There are two types of sensors:

Local sensors (internally mounted)

Global sensors (externally mounted)

In our project we used both local and global sensors. In selecting the local sensor used to measure distance to the surrounding environment, we had three choices: an IR sensor, LIDAR (light detection and ranging) and an ultrasonic sensor.

An IR (infrared) sensor is an electronic device with high noise immunity, but we can't use it in our project because it works over a very short range, i.e. 1 mm to 2 mm. It is easily affected by the physical environment (fog, dust, rain and pollution, which affect data transmission), and it can't distinguish objects that have very similar temperatures.

LIDAR (light detection and ranging) is a sensor in which light in the form of a pulsed laser is used to measure distances. It has very high range and resolution, but we can't use it in our project because it is expensive and large, and precise alignment is required.

An ultrasonic sensor determines distance by utilizing the properties of sound, namely the time difference between sending and receiving a sound pulse. It is excellent at very near range (4 m maximum, but accurate up to 2 m), it works regardless of light levels, and it is cheap and small. We selected the ultrasonic sensor to measure distances.

4.6.1 ULTRASONIC SENSOR (HC-SR04)

The hardware model senses distances to the surrounding environment simply by using ultrasonic distance sensors; we used three ultrasonic sensors in our project.

The ultrasonic module HC-SR04 includes an ultrasonic transmitter, a receiver and a control circuit. Its working parameters are:

Requires 5 V for operation.

Operates at a 40 kHz frequency.

For distance calculation, the trigger acts as an output and emits a 40 kHz burst.

A timer runs until the echo pin receives the reflected signal, level-shifted to 3.3 V by a 1 kΩ resistor.

The start and stop times are subtracted to get the pulse duration.

Multiplying this duration (in seconds) by 17150 gives the distance in centimeters (17150 cm/s is half the speed of sound, 34300 cm/s, accounting for the round trip).

Errors are up to ±3 mm.

Distance calculation in cm: distance = pulse duration (s) × 17150.
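A minimal sketch of this measurement on the Raspberry Pi with the RPi.GPIO library (the BCM pin numbers are hypothetical; the logic follows the steps above):

import RPi.GPIO as GPIO
import time

TRIG, ECHO = 23, 24                      # hypothetical BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm():
    # Emit a 10 microsecond trigger pulse to start the 40 kHz burst.
    GPIO.output(TRIG, GPIO.HIGH)
    time.sleep(0.00001)
    GPIO.output(TRIG, GPIO.LOW)
    # Time how long the echo pin stays high (round-trip flight time).
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 17150        # half of 34300 cm/s

print("%.1f cm" % distance_cm())
GPIO.cleanup()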


Fig 4.10 Relationship between actual and calculated distances

4.6.2 ELECTRIC PARAMETERS

Table 1 lists the electric parameters of the HC-SR04 [5] (values from the module's datasheet):

Working voltage: DC 5 V
Working current: 15 mA
Working frequency: 40 kHz
Maximum range: 4 m
Minimum range: 2 cm
Measuring angle: 15°
Trigger input signal: 10 µs TTL pulse
Echo output signal: TTL pulse proportional to range
Dimensions: 45 × 20 × 15 mm


Fig 4.11 Ultrasonic sensor (HC-SR04)

The pin connections are:

5V VCC supply

0V ground

Trigger pulse input

Echo pulse output

4.7 CAMERA

In our project the camera is used for image processing, namely image similarity and optical character recognition of the desired cargo. In selecting the camera we had several choices: the Pi camera, the Pixy cam and a webcam.

The Pi camera is a Raspberry Pi module that can take still pictures and record video. It is a tiny 5-megapixel camera module for high-definition video and stills, which means it requires larger bandwidth. We did not use the Pi camera in our project because it requires more memory and more processing time.

The Pixy cam is a device capable of detecting the color of objects and tracking their position. It makes robot vision very easy and would have simplified our programming, but we can't use it in our project because we have to perform OCR (optical character recognition), whose algorithm is very difficult to implement with the Pixy cam. It also performs poorly when lighting conditions change.

A webcam is a USB-connected camera. It is low cost and convenient for our project. We selected a 2.1-megapixel USB camera for image processing, i.e. image similarity and optical character recognition of the desired cargo. It requires low bandwidth, which means it uses less memory and less processing time.

Fig 4.12 2.1 megapixel camera


CHAPTER FIVE

SYSTEM SOFTWARE

5.0 INTRODUCTION

Our software performs a series of surrounding-data acquisitions using local distance sensors; furthermore, various image processing routines are used to detect the desired cargo. The map generated (the path of motion) on a pygame window is wirelessly communicated to our main computer via Wi-Fi. The software has been broadly segmented into five parts:

Forward motion planning and map generation

Image similarity

Application of OCR

Reverse motion planning

User interface communication

All these tasks have been performed in Python.

5.1 PYTHON

Python is a programming language that lets you work quickly and integrate your systems effectively. It is a user-friendly platform which lets you integrate more than one language together; in simple words, it is very supportive of the user and easy to work with compared to other programming languages.


Python can be used in many application domains such as:

Web and Internet Development

Python offers many choices for web development:

Frameworks such as Django and Pyramid.

Micro-frameworks such as Flask and Bottle.

Advanced content management systems such as Plone and django CMS.

Python's standard library supports many Internet protocols:

HTML and XML

E-mail processing.

Support for FTP, IMAP, and other Internet protocols.

Easy-to-use socket interface.

Scientific and Numeric

Python is widely used in scientific and numeric computing:

NumPy is the primary package for scientific computing with Python.

SciPy is a collection of packages for mathematics, science, and engineering.

Pandas is a data analysis and modeling library.

IPython is a powerful interactive shell that features easy editing and recording of a work

session, and supports visualizations and parallel computing.

The Software Carpentry Course teaches basic skills for scientific computing, running

bootcamps and providing open-access teaching materials.


Education

Python is a superb language for teaching programming, both at the introductory level and in

more advanced courses.

Books such as How to Think Like a Computer Scientist, Python Programming: An

Introduction to Computer Science, and Practical Programming.

The Education Special Interest Group is a good place to discuss teaching issues.

Desktop GUIs

The Tk GUI library is included with most binary distributions of Python.

Some toolkits that are usable on several platforms are available separately:

wxWidgets

Kivy, for writing multitouch applications.

Qt via pyqt or pyside

Platform-specific toolkits are also available:

GTK+

Microsoft Foundation Classes through the win32 extensions

Software Development

Python is often used as a support language for software developers, for build control and

management, testing, and in many other ways.

SCons for build control.

Buildbot and Apache Gump for automated continuous compilation and testing.

Roundup or Trac for bug tracking and project management.


5.1.1 What's New in Python?

This section summarizes the new features in Python 2.7.

Numeric handling has been improved in many ways, for both floating-point numbers and for the Decimal class. There are some useful additions to the standard library, such as a greatly enhanced unittest module, the argparse module for parsing command-line options, convenient OrderedDict and Counter classes in the collections module, and many other improvements.

Much as Python 2.6 incorporated features from Python 3.0, version 2.7 incorporates some of the new features in Python 3.1. The 2.x series continues to provide tools for migrating to the 3.x series.

5.1.2 FEATURES:

The syntax for set literals ({1, 2, 3} is a mutable set).

Dictionary and set comprehensions ({i: i*2 for i in range(3)}).

Multiple context managers in a single with statement.

A new version of the io library, rewritten in C for performance.

The ordered-dictionary type described in PEP 372: Adding an Ordered Dictionary to collections.

The new "," format specifier described in PEP 378: Format Specifier for Thousands Separator.

The memoryview object.

A small subset of the importlib module.

The repr() of a float x is shorter in many cases: it is now based on the shortest decimal string that is guaranteed to round back to x. As in previous versions of Python, it is guaranteed that float(repr(x)) recovers x.

Float-to-string and string-to-float conversions are correctly rounded. The round() function is also now correctly rounded.

The PyCapsule type, used to provide a C API for extension modules.

The PyLong_AsLongAndOverflow() C API function.
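A few of these 2.7 features in one short, self-contained snippet (illustrative only):

from collections import OrderedDict, Counter

s = {1, 2, 3}                           # set literal syntax
d = {i: i * 2 for i in range(3)}        # dict comprehension
print("{:,}".format(1234567))           # thousands separator -> 1,234,567

od = OrderedDict([("a", 1), ("b", 2)])  # remembers insertion order
print(Counter("slam based forklift"))   # counts characters

# Multiple context managers in a single with statement:
with open("in.txt", "w") as src, open("out.txt", "w") as dst:
    dst.write("demo")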

5.2 WHY PYTHON?

So what are the major reasons why we chose Python and recommend it to as many people as possible? It comes down to three major reasons.

5.2.1 Readability

Python very closely resembles the English language, using words like not and in, so that you can very often read a program, or script, aloud to someone else and not feel like you're speaking some arcane language. This is also helped by Python's very strict punctuation rules, which mean you don't have curly braces ({ }) all over your code.

Also, Python has a set of rules, known as PEP 8, that tell every Python developer how to format their code. This means you always know where to put new lines and, more importantly, that pretty much every other Python script you pick up, whether it was written by a novice or a seasoned professional, will look very similar and be just as easy to read. The fact that Python code written with five or so years of experience looks very similar to the code that Guido van Rossum (the creator of Python) writes is such an ego boost.


5.2.2 Libraries

Python has been around for over 20 years, so a lot of code written in Python has built up over the decades and, it being an open source language, a lot of this has been released for others to use. Almost all of it is collected on https://pypi.python.org, pronounced "pie-pee-eye" or, more commonly, called the Cheese Shop. You can install this software on your system to be used by your own projects. For example, if you want to use Python to build scripts with command line arguments, you'd install the click library and then import it into your scripts and use it. There are libraries for pretty much any use case you can come up with, from image manipulation to scientific calculations to server automation.
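For instance, a minimal command-line script using click might look like this (a sketch, assuming click has been installed from PyPI):

import click

@click.command()
@click.option("--count", default=1, help="Number of greetings.")
@click.argument("name")
def hello(count, name):
    """Greet NAME the given number of times."""
    for _ in range(count):
        click.echo("Hello, %s!" % name)

if __name__ == "__main__":
    hello()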

5.2.3 Community

Python has user groups everywhere, usually called PUGs, and holds major conferences on every continent other than Antarctica. PyCon NA, the largest Python conference in North America, sold out its 2,500 tickets this year. And, reflecting Python's commitment to diversity, it had over 30% women speakers. PyCon NA 2013 also started a trend of offering Young Coder workshops, where attendees taught Python to kids between 9 and 16 years of age for a day, getting them familiar with the language and, ultimately, helping them hack and mod some games on the Raspberry Pis they were given. Being part of a positive community does a lot to keep you motivated.


5.3 FORWARD MOTION PLANNING AND MAP GENERATION

As our project is a SLAM based cargo detecting forklift, creating an autonomous land vehicle that plans its own path, forward motion planning is the ability to empower our microprocessor to take decisions on its own without any human input.

SLAM is an acronym for Simultaneous Localization and Mapping: building a map of an unknown environment with an autonomous robot, which then follows the map generated. SLAM is applicable to both 2D and 3D mapping, but we are only considering 2D mapping. There are two types of SLAM: grid SLAM and monocular SLAM.

5.3.1 MONOCULAR SLAM

In monocular SLAM a single visual camera is used. However, the software used in monocular SLAM is much more complicated, because the algorithms needed for monocular SLAM are complex. A disadvantage of monocular SLAM is that depth cannot be concluded directly from a single camera image; instead, it has to be calculated by analyzing the video using an EKF (Extended Kalman Filter). [5]

5.3.2 GRID SLAM

In grid SLAM, grids are used to represent the environment. With this form of representation, the continuous space of the environment is discretized in such a way that, from that moment, the environment is represented as a multi-dimensional (2D or 3D) grid or matrix. [6]


In our project we implement both types of SLAM: a camera is used to recognize the desired object by performing OCR (optical character recognition), and three ultrasonic sensors are used to generate a map on the pygame window.

Most approaches implemented for SLAM based robots require a feedback mechanism in the form of PID (proportional, integral and derivative) control to neutralize any deviation in actuator motion (permanent magnet DC motors), or encoder systems that record motor movements to obtain motion information. Contrary to this conventional approach, common in SLAM based machines, we record all forward motion in the form of array-based coordinates, as shown by the cyan and pink turning blocks in Fig 5.1, eliminating encoder use. The motors are synchronized using PWM techniques, which removes the need for a PID controller.

The environmental data is acquired in the form of distances from three ultrasonic sensors (HC-SR04) mounted on a Raspberry Pi 3 B, with one sensor at 0° (right), one at 90° (front) and one at 180° (left). Under static conditions distance is acquired, and after determining that the front distance is greater than 0.5 m the forklift moves forward. There is a 500 ms (millisecond) hiatus in forward motion, as distance can only be acquired under static conditions.

Once the front distance drops below 0.5 m, our sensors compare the left and right values: if the right distance is greater than the left distance the forklift turns right, and vice versa when the opposite holds. A sketch of this decision logic is given below.
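A condensed sketch of the decision rule just described (read_distance() is a placeholder for the HC-SR04 routine shown in Chapter 4; the 0.5 m threshold follows the text):

def decide(read_distance):
    """Return 'forward', 'left' or 'right' from three static readings."""
    front = read_distance("front")      # sensor at 90 degrees
    if front > 50:                      # front clearance in cm (0.5 m)
        return "forward"
    left = read_distance("left")        # sensor at 180 degrees
    right = read_distance("right")      # sensor at 0 degrees
    return "right" if right > left else "left"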

The problem that comes forth is how to portray this motion in the form of a map. This purpose is attained by showing our map in a two-dimensional Euclidean domain. For example, if our robot moves forward along the positive y-axis, the first right turn causes motion in the positive x-axis direction on the pygame window, and a second right turn causes motion towards the negative y-axis. This shows that the visual trajectory of motion is a difficult feat to achieve. The logical approach below, which we have designed, allows us to develop a robust algorithm for path generation.

Fig 5.1 Pygame window map generation

For path generation on the pygame window we installed pygame 9.1.2b on our Raspberry Pi 3. We use three variables, a, b and code, for map generation. Variables a and b are used to represent the path, while code is used to represent whether the forklift has picked up the desired object or not. Initially all the variables are equal to zero. When the forklift picks up the desired object, code=0 changes into code=1 and the robot then follows the reverse motion path to place the object back.

We made algorithms for all the possible conditions that the robot can follow:

When the robot is at its initial position a=0 and b=0, but when it first moves forward along the positive y-axis trajectory the variables change to a=1 and b=1, as shown in Fig 5.2.

Fig 5.2


Moving left increments variable a, and moving right increments variable b. If the robot's first turn is to the left, motion is along the negative x-axis and the variables change to a=2 and b=1. If the first turn is to the right, motion is along the positive x-axis and the variables change to a=1 and b=2, as shown in Fig 5.3.

Fig 5.3

A second left after the first left causes motion along the negative y-axis; the variables change to a=3 and b=1. A second right after the first left causes motion along the positive y-axis; the variables change to a=2 and b=2, as shown in Fig 5.4.

Fig 5.4


A second left after the first right causes motion along the positive y-axis; the variables change to a=2 and b=2. As this condition is identical to a second right after a first left, the code for both conditions is the same. A second right after the first right causes motion along the negative y-axis; the variables change to a=1 and b=3, as shown in Fig 5.5.

Fig 5.5

A third left after two lefts causes motion along the positive x-axis. As this is similar to the robot's first right movement, the variable values are also those of the first right, i.e. a=1 and b=2. A third right after two lefts causes motion along the negative x-axis. As this is similar to the first left movement, the variable values are those of the first left, i.e. a=2 and b=1, as shown in Fig 5.6.


Fig 5.6

A third left after a first left and second right, or after a first right and second left, causes motion along the negative x-axis; as this is similar to the first left movement, the variables are those of the first left, i.e. a=2 and b=1. A third right after a first left and second right, or after a first right and second left, causes motion along the positive x-axis; as this is similar to the first right movement, the variables are those of the first right, i.e. a=1 and b=2, as shown in Fig 5.7.

Fig 5.7


A third left after two rights causes motion along the positive x-axis; as this is similar to the first right movement, the variable values are those of the first right, i.e. a=1 and b=2. A third right after two rights causes motion along the negative x-axis; as this is similar to the first left movement, the variable values are those of the first left, i.e. a=2 and b=1, as shown in Fig 5.8.

Fig 5.8

Note that the conditions after the third left and third right are the same as those for the first left and first right, so the code for all these conditions is the same. At every turning point of the robot the coordinates are saved into variables r and s as [r, s], which are later used for the reverse motion of the robot. The complete algorithm for map generation is shown in Fig 5.9; a condensed sketch of the same turn bookkeeping follows.
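The sketch below expresses the same idea with a heading index instead of the a/b counters (a simplified equivalent; drawing and motor control are omitted):

# Headings cycle through +y, +x, -y, -x as the robot turns right;
# a left turn steps the other way. Turn coordinates are kept for the
# reverse-motion pass, like the [r, s] list described above.
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # +y, +x, -y, -x

class PathTracker(object):
    def __init__(self):
        self.h = 0                 # index into HEADINGS, start facing +y
        self.x = self.y = 0
        self.turns = []            # saved [r, s] turning coordinates

    def forward(self, step=1):
        dx, dy = HEADINGS[self.h]
        self.x += dx * step
        self.y += dy * step

    def turn(self, direction):
        self.turns.append([self.x, self.y])  # record the turning point
        self.h = (self.h + (1 if direction == "right" else -1)) % 4

t = PathTracker()
t.forward(3); t.turn("right"); t.forward(2); t.turn("right"); t.forward(1)
print(t.turns, (t.x, t.y))       # [[0, 3], [2, 3]] (2, 2)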


Fig 5.9 Complete algorithm for map generation

5.4 PYGAME WINDOW

To initialize a window or screen for displaying our map we use the command pygame.display.set_mode(). This function creates a display surface; its arguments are the required width and height of the screen. [7]

The display surface's fill() method fills the screen with a color. Each channel's value ranges from 0 to 255; black is (0, 0, 0).

pygame.display.update() updates only a portion of the screen instead of the entire area of the software display. After checking each condition of the robot's forward motion we use this command to update the path on the pygame window, as in the sketch below.
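A minimal sketch tying these calls together (the window size, colors and coordinates are illustrative):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))   # width, height of the map window
screen.fill((0, 0, 0))                         # black background

# Draw one recorded path segment, e.g. from a turn point to the current position.
pygame.draw.line(screen, (0, 255, 255), (100, 400), (100, 200), 3)  # cyan segment
pygame.display.update()                        # push the new segment to the screen

# Keep the window open until it is closed.
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
pygame.quit()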


5.5 OPEN CV

5.5.1 INTRODUCTION

OpenCV (Open Source Computer Vision Library) is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.

The library has more than 2500 optimized algorithms, which includes a comprehensive set of

both classic and state-of-the-art computer vision and machine learning algorithms. These

algorithms can be used to detect and recognize faces, identify objects, classify human actions in

videos, track camera movements, track moving objects, extract 3D models of objects, produce

3D point clouds from stereo cameras, stitch images together to produce a high resolution image

of an entire scene, find similar images from an image database, remove red eyes from images

taken using flash, follow eye movements, recognize scenery and establish markers to overlay it

with augmented reality, etc. OpenCV has a user community of more than 47 thousand people and an estimated number of downloads exceeding 7 million. The library is used extensively in companies, research groups and by governmental bodies.

Along with well-established companies like Google, Yahoo, Microsoft, Intel, IBM, Sony, Honda and Toyota that employ the library, there are many startups, such as Applied Minds, VideoSurf and Zeitera, that make extensive use of OpenCV. OpenCV's deployed uses span the range from stitching street-view images together, detecting intrusions in surveillance video in Israel, monitoring mine equipment in China, helping robots navigate and pick up objects at Willow Garage, detecting swimming pool drowning accidents in Europe, running interactive art in Spain and New York, and checking runways for debris in Turkey, to inspecting labels on products in factories around the world and rapid face detection in Japan.

It has C++, C, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed right now. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers.

In simple words, OpenCV is an open source C++ library for image processing and computer vision, originally developed by Intel and now supported by Willow Garage. It is free for both commercial and non-commercial use; therefore it is not mandatory for your OpenCV applications to be open or free.

It is a library of many inbuilt functions mainly aimed at real-time image processing. It now has several hundred image processing and computer vision algorithms which make developing advanced computer vision applications easy and efficient.

5.5.2 KEY FEATURES

Optimized for real-time image processing and computer vision applications.

The primary interface of OpenCV is in C++.

There are also full C, Python and Java interfaces.

OpenCV applications run on Windows, Android, Linux, Mac and iOS.

Optimized for Intel processors.

5.5.3 WHY OPEN CV?

If you are new to computer vision applications, you might be wondering where to start. First you have to know the basic principles of image processing and computer vision. Then you have to select an appropriate language in which to develop your computer vision application. Some of the most prevalent approaches use OpenCV with C++, MATLAB or AForge.

MATLAB is one of the easiest but least efficient ways to process images; OpenCV, on the other hand, is the most efficient but also the hardest way to process pictures. Nonetheless, OpenCV has lots of elementary inbuilt image processing functions, so that those who want to learn computer vision can develop their applications with a proper understanding of what they do.

So, we consider that it is good to learn computer vision with OpenCV, as we did in our project.

5.5.4 IMAGE PROCESSING

Image processing, in very simple words, is any type of process that we perform on an image. It is basically a collection of algorithms and tools for extracting objects or features from an image. Nowadays, image processing is among the most rapidly growing technologies, and it forms a core research area within the engineering and computer science disciplines.

But before image processing we must know: what is an image?

An image is a collection of points in a matrix format: a bunch of values arranged together, given in x and y coordinates (2D in our case), so that every point has some relation with the points nearby it, and they all have different values. In more technical words, an image is a two-dimensional function f(x, y), where x and y are the spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point. [12]

If x, y and the amplitude values of f are finite and discrete quantities, we call the image a digital image. A digital image is composed of a finite number of elements called pixels, each of which has a particular location and value. [13]

The whole goal of image processing tools is to find features, particular aspects of the objects we are interested in, and ways of extracting those features so that we can extract the objects of interest; it is all about unique features. In the simplest case the object has values distinct from the background, so we could just threshold and say that any value greater than some level is our object. But usually it is not that easy. So the whole point of image processing is to identify unique features, have tools for finding them, and develop algorithms for extracting them so they can then pull out the particular object of interest.

Image processing fundamentally involves the following three steps:

Importing the image via image acquisition tools;

Analysing and manipulating the image;

Producing the output, which can be an altered image or a report based on the image analysis.

There are two types of methods used for image processing, namely analogue and digital image processing. Analogue image processing can be used for hard copies like printouts and photographs; image analysts use various fundamentals of interpretation while using these visual techniques. Digital image processing techniques help in the manipulation of digital images by using computers. The three general phases that all types of data undergo in the digital technique are pre-processing, enhancement and display, and information extraction.[14]


Specifically, digital image processing is actually practical technology for:

Classification

Feature extraction

Multi-scale signal analysis

Pattern recognition

Projection

As we have to implement optical character recognition, our main focus among these is on classification, feature extraction and, to some extent, pattern recognition.

5.6 K-NEAREST NEIGHBOR ALGORITHM

5.6.1 CLASSIFICATION WITH K-NEAREST NEIGHBORS

The classification algorithm used in our project is k-nearest neighbors, popularly known as the KNN algorithm. Classification, in simple words, means creating a model that best divides or separates our data. For example, suppose you have a graph with some data points on it, as shown below.

Fig 5.10 Clustering data points


The objective is to separate these into obvious groups; looking at this intuitively, you can see that there are two groups here. By doing that, we have actually performed clustering. Classification can be easily understood from the data set shown in fig. 5.11.

Fig 5.11 Two groups of data points

There are groups of pluses and minuses, and the objective is to create some type of model that fits both of these groups, that is, one that properly divides them: some sort of model that defines the pluses and some sort of model that defines the minuses. Now suppose you have a new data point x; to which group should it be assigned? The most likely answer is the plus group. Similarly, a point y that lies somewhere close to the minuses would probably be assigned to that group. Basically, all four pluses are closer to x than the closest minus, which is quite far away. It turns out that what has been done here is nearest neighbors: checking which points are the nearest, or closest, to the NEW point in the data. Most of the time this is used as k-nearest neighbors, where k is the number of closest points considered for the new point. For instance, with k=2 it would find the two closest neighbors. The two points closest to x and to y turn out to be a plus and a minus respectively, as shown below.

Fig 5.12 Two closest points to 'x' and 'y' when k=2

But what if there were a point somewhere in the middle of the two groups, whose two nearest points are as shown in fig. 5.13?

Fig 5.13 Two closest points to 'z' when k=2

Here the nearest neighbors effectively place a vote on the identity of the new point z, and we have a tie. To avoid ties, k in KNN is generally chosen to be an odd number.


Fig 5.14 Three closest points to 'z' when k=3

As shown in fig. 5.14, in this case the vote would be minus, minus and plus, i.e. 2 out of 3, so we would say the class is actually the negative class. Another thing to keep in mind is that if, for instance, there are three groups, k should be at least 5 to avoid any sort of split vote. From k-nearest neighbors we obtain not only the actual classification of the picked data point but also a measure of accuracy: the model can be trained and tested for its overall accuracy, and each individual point can be given a degree of confidence. In the example discussed before, with a minus, a minus and a plus, there is 66% confidence in the classification of that data point; beyond that per-point confidence, the trained k-nearest neighbors model as a whole also has an accuracy, which acts as a kind of confidence.
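A minimal Python sketch of this voting and per-point confidence (the label strings are ours, just for illustration):

from collections import Counter

votes = ['minus', 'minus', 'plus']   # classes of the k=3 nearest neighbors
winner, count = Counter(votes).most_common(1)[0]
confidence = 100.0 * count / len(votes)
print winner, confidence             # minus 66.66...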

In order to find the closest neighbors we have to measure distance; we used the EUCLIDEAN distance. The simplest method is to measure the Euclidean distance between the given point and all of the other points; the closest neighbors are then the points with the smallest distances.


EUCLIDEAN distance:

Named after Euclid, the famous mathematician popularly known as the father of geometry.

d(p, q) = \sqrt{\sum_{i=1}^{n} (q_i - p_i)^2}

Where n represents the number of dimensions in the data.

For example, the coordinates of the data are:

x = (1, 3)

y = (2, 5)

So the Euclidean distance will be

d(x, y) = \sqrt{(2-1)^2 + (5-3)^2} = \sqrt{5} = 2.23606...
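The same calculation as a minimal Python sketch (the helper name euclidean is ours, not part of the project code):

import math

def euclidean(p, q):
    # Straight-line distance between two points of equal dimension n.
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

print euclidean((1, 3), (2, 5))   # 2.2360679..., i.e. sqrt(5)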

5.6.2 K-NEAREST NEIGHBORS ALGORITHM OVERVIEW

In the project, character recognition is performed via the k-nearest neighbors algorithm in OpenCV. The objective is to detect the alphanumeric code written on the cargo. For this purpose it has to detect the digits 0-9 and the alphabets A-Z.

Let's suppose that, to recognize the digits 0-9, there are five training images of each digit, as shown below.


As in any machine learning process, there is a training data set; in this case, five images of each of the characters 0 through 9. For example, there are five 0s, five 1s and so on: 10 digits with 5 training images each, for a total of 50 training images. The second step is to resize each of the training images to 10 pixels x 10 pixels, so that each image has a total area of 100 pixels. Each image in the test set is also resized in the same way when we later attempt to identify it.

As a result of the training process, two parallel data structures are formed:

1) The set of 50 images.

2) The set of numbers indicating which group or classification each corresponding image belongs to. For example, for the first five images out of the 50, the classification data structure holds five 0s, then five 1s, and so on, so that each of the 50 images has a corresponding label.

Once the training process is done, testing begins, that is, identifying the characters under test. For example, suppose an unknown digit X is to be identified. The KNN algorithm starts the process of identifying that unknown digit by finding its nearest neighbors, which ultimately means the best matches to X out of the training set of images, as illustrated below.


This shows that when X is compared to the training set, the best possible match is a training image in which 99 of the 100 pixels of X matched, and that image happens to be a 0; that is the nearest neighbor. In the second-best match 97 of the pixels matched, and that happened to be an 8. In the third-best match 96 of the pixels matched, again an 8, and so on all the way down to the worst matches, where only 4 and 2 of the pixels matched. The next consideration is the k value, where k refers to the number of nearest neighbors. In our example data set, when k=1, X=0, because only the single nearest neighbor is looked at, and that is a 0. Similarly, when k=3 there are two 8s and one 0, so two out of the three are 8s and X=8. In the same manner, when k=5, X=0, as there are three 0s out of five.

One specific point in choosing the k value is not to choose an even number, as discussed in detail in section 5.6.1. Another is that the k value cannot be larger than the smallest number of samples of any of the classes. For example, if there were only three training images of the character 0 but 50 images of each of the characters 1-9, the k value could not be larger than 3, because there are only three 0s. That is a limitation to keep in mind for the data set.
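As an illustrative sketch of this train-and-classify flow (a sketch assuming the cv2.ml module of OpenCV 3.x; older 2.4 builds expose cv2.KNearest instead, and the random toy arrays below stand in for real flattened 10x10 character images, not the project's actual data):

import cv2
import numpy as np

# Each training image is flattened to one 1x100 float32 row (10x10 pixels);
# 'labels' holds the class (the character) of each corresponding row.
train = np.random.randint(0, 256, (50, 100)).astype(np.float32)
labels = np.repeat(np.arange(10), 5).reshape(-1, 1).astype(np.float32)

knn = cv2.ml.KNearest_create()
knn.train(train, cv2.ml.ROW_SAMPLE, labels)

# Classify one unknown image: its k nearest training rows vote on the class.
unknown = np.random.randint(0, 256, (1, 100)).astype(np.float32)
ret, result, neighbours, dist = knn.findNearest(unknown, 3)
print result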


In the project we worked on relatively simplified data, but in a production environment one would test with different fonts and different sizes, and reading people's handwriting would require testing thousands of different handwriting samples. In our case there are the digits 0-9 and the alphabets A-Z, with one instance of each, to keep the program's training time down.

To sum up, all the major steps of the algorithm explained in detail above are shown in fig. 5.15.


Fig 5.15 KNN Algorithm Overview


5.7 OPTICAL CHARACTER RECOGNITION

5.7.1 INTRODUCTION

Optical character recognition, colloquially known as OCR, is the ability of the computer to convert images into strings. This approach utilizes a series of filters to end up with a clean image.

5.7.2 STEPS WITH IMAGES:

1) Conversion of the image from RGB to gray scale.

In photography and computing, a grayscale digital image is an image in which the value of each pixel is a single sample, that is, it carries only intensity information. Such images are also known as black-and-white; they are composed solely of shades of gray, varying from black at the weakest intensity to white at the strongest.[15]

The advantage of converting to gray scale is that it simplifies the mathematics: it is relatively easier to deal with (in terms of calculation) a single color channel (shades of white and black) than multiple color channels. In addition, the objective of the project can be accomplished on gray scale images. This decreases the complexity and increases the processing speed compared to a multichannel color image.

Fig 5.16a Original image; Fig 5.16b Grayscale image
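This step is a single call in OpenCV's Python bindings (the file name here is only an example):

import cv2

img = cv2.imread('cargo.jpg')                  # 3-channel BGR image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # single-channel intensity image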


2) Blurring and Thresholding.

The main purpose of applying the Gaussian blur is to reduce noise. We applied a Gaussian blur with a 3x3 kernel, which is a two-dimensional matrix. In a Gaussian blur every pixel is processed by taking the weighted average of the pixel values around it, with weights given by each pixel's position in the convolution kernel, and writing the result back to the image. This is carried out from the first pixel of the image, applying the kernel at every pixel in turn until the end is reached. At that point the image is blurred.

After noise smoothing, the next operation in the image processing chain is thresholding. Thresholding replaces each pixel in an image with a black pixel if its intensity is below a certain constant T, or with a white pixel if its intensity is above this threshold value. Adaptive thresholding, also known as local or dynamic thresholding, is applied to the image instead of fixed thresholding, because where the lighting differs from area to area a fixed threshold degrades performance; adaptive thresholding gives better results for images with varying illumination.[15]

Adaptive Gaussian thresholding is applied to our image; it takes a sum of weighted values of the neighboring pixels within the window. In simple words, it selects an individual threshold for each pixel based on the range of intensity values in its local neighborhood.

Fig 5.17 Thresholded image
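These two operations correspond directly to a pair of OpenCV calls; a minimal sketch (the block size 11 and constant 1 in the adaptive threshold mirror the values used in the project code in Appendix A):

import cv2

gray = cv2.imread('cargo.jpg', cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(gray, (3, 3), 0)   # 3x3 Gaussian kernel
thresh = cv2.adaptiveThreshold(blur, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 1)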


3) Finding the possible characters in the image.

The next technique applied to the image is contour tracking, in order to extract boundaries. It is also one of the preprocessing techniques used to extract information about the general shape of a pattern. Once the contour of a given pattern is extracted, its different characteristics are examined and used as features. For that reason, correct extraction of the contour will generate more precise features, which will improve the chances of correctly classifying a given pattern, or character in our case.

The question arises: why spend computational time on this instead of collecting features directly from the image? Here comes the advantage of contouring. The contour pixels are usually a small subset of the total number of pixels representing a pattern, so the amount of computation is significantly reduced when we run feature extraction algorithms on the contour instead of on the entire pattern. Since the contour shares a lot of features with the original pattern, the feature extraction process becomes much more efficient when performed on the contour rather than on the original pattern.[16]

Fig 5.18 Image contours
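A sketch of the corresponding call (the two-value return signature below is the one used by the project code in Appendix A, i.e. OpenCV 2.4; OpenCV 3.x returns three values):

import cv2

thresh = cv2.imread('thresh.jpg', cv2.IMREAD_GRAYSCALE)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE,
    cv2.CHAIN_APPROX_SIMPLE)
print len(contours)   # number of candidate character boundaries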


After the detection of boundaries or edges, shapes can be detected, and the main task of detecting the characters in the image begins. For this, the following steps are carried out.

Finding the plates:

- Find possible characters in the scene.

- Find lists of lists of matching characters in the scene.

- Extract the plate and find the list of possible plates.

Then, finding the characters within the plate:

- Find possible characters in the plate.

- Find lists of lists of matching characters in the plate.

- Recognize the characters in the plates through the KNN algorithm.

Fig 5.19a Possible characters in the scene; Fig 5.19b List of matching characters in the scene

Fig 5.19c List of possible name plates on the cargo

Fig 5.19d Cropped image from the original image; Fig 5.19e Thresholded image

Fig 5.19f Recognized characters

Fig 5.20 Possible plate on the cargo


5.8 FLOWCHART REPRESENTATION OF OCR

Fig 5.21 Flowchart representation of OCR


5.9 OBJECT DETECTION

Our aim was to detect the cargo. For this purpose an image similarity algorithm was developed, primarily to enable the robot to distinguish between the desired object and an obstacle. Even though converting the images to gray scale increases the similarity between them, the difference is normally about 3-4% between the desired object and the stored image, whereas it widens to 10-20% between an obstacle and the pre-stored image.

In the second stage, once the forklift comes within less than 0.5 m of the obstacle, it captures an image using a 2.1 megapixel USB camera and compares that image with the pre-stored image. If it matches the pre-stored image, the cargo has been detected and must be lifted by the forklift.

Fig 5.22 Flow chart of image comparison
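The similarity score itself is the mean per-pixel difference expressed as a percentage, as in the project's camerasimilaritys() routine in Appendix A; a condensed NumPy sketch of the same idea (assuming two grayscale images of equal size):

import cv2
import numpy as np

def similarity(path_a, path_b):
    # 100 minus the mean absolute pixel difference as a share of the 0-255 range.
    a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    dif = np.abs(a - b).mean()
    return 100.0 - (dif / 255.0 * 100.0)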

This image comparison approach not only became complex but, most importantly, was time consuming: first the image processing is done, followed by extraction of the characters in the image, and then the whole process is repeated for the image captured by the webcam every time the forklift came within 0.5 m of an object. Another major flaw of this approach was that it was not reliably detecting the cargo, and the OCR implemented on it was not successful.

The next approach was a very simple yet efficient one: the user input approach. Initially the program asks the user to enter the code of the cargo that needs to be detected, in other words the cargo that needs to be lifted. The capture step is the same as before: the webcam captures an image when the forklift is at a distance of less than 0.5 m from the obstacle, and then the whole series of image processing techniques is applied, leading to character recognition, as a result of which only the characters are extracted from the image. These characters are then compared directly to the user's input; if they match, the cargo has been detected and the forklift should lift it up. Otherwise, if they do not match, the robot considers the object an obstacle and continues on its path, avoiding it.
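The final comparison reduces to a string match. A minimal sketch of the decision (lift_cargo and avoid_obstacle are placeholders for the motor routines of Appendix A, and thresh.jpg is the cleaned-up OCR input produced there):

from PIL import Image
from pytesseract import image_to_string

def lift_cargo():       # placeholder for the up()/turnaround() routines
    print 'cargo detected, lifting'

def avoid_obstacle():   # placeholder for the turn/avoid routines
    print 'obstacle, avoiding'

target = raw_input('Enter the cargo code to pick: ').strip().upper()
recognized = image_to_string(Image.open('thresh.jpg')).strip().upper()

if recognized == target:
    lift_cargo()
else:
    avoid_obstacle()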

5.10 REVERSE MOTION PLANNING

The fourth step requires reverse motion planning (returning to the initial position after lifting the cargo). This is achieved by generating a two-dimensional array consisting of the starting point, all turning points and the point of cargo detection (denoted by the pink and cyan blocks on the pygame window shown in fig 5.1).

The length of this array is variable, depending on the number of turns. We first get the length of the array, then subtract the last turning point (n-1) from the point of detection (n) and take the magnitude of the difference vector using np.linalg.norm(x). Using the law of proportion, the distance between the two points determines the amount of time the motors run in the forward direction. Once point n-1 is reached, the left and right sensors decide whether to turn left or right; we then take the difference between points n-1 and n-2 and apply the same principle as before. When the subtraction at n-x results in an index below zero, the program is halted.

np.linalg.norm(x): This function can return one of eight different matrix norms, or one of an infinite number of vector norms, depending on the value of the ord parameter. x is an input array; if no axis is defined, x must be 1-D or 2-D.[10]
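A condensed sketch of this replay logic (the waypoint list and the scale constant below are assumptions for illustration; the real waypoints are recorded in the list back in Appendix A):

import numpy as np

back = [(100, 500), (100, 400), (200, 400)]   # start, turning point, detection point

t = len(back)
for u in range(1, t):
    # Magnitude of the vector between consecutive recorded points...
    step = np.subtract(back[t - u], back[t - u - 1])
    dist = np.linalg.norm(step)
    # ...scaled (law of proportion) into a motor run time in seconds.
    run_time = dist * 0.01   # assumed scale: 0.01 s per map unit
    print 'drive forward for %.2f s' % run_time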

5.11 WI-FI COMMUNICATION

5.11.1 INTRODUCTION

Wi-Fi is a big buzzword in today's world. Wi-Fi is a technology that enables different electronic devices to connect to a wireless LAN (WLAN) network, which may be password protected or open, allowing the devices to access the resources of the WLAN network that lie within its range. In simple words, it provides internet access to devices that lie within the range of the wireless network.

Wi-Fi uses a type of electromagnetic radiation, radio waves, just as cell phones, televisions and radios do. In fact, communication across a wireless network is quite like two-way radio communication, giving us the power to send huge amounts of data wirelessly. Different types of electromagnetic radiation are pretty similar; what makes them different is their wavelength. Wi-Fi mainly uses an ultra high frequency of 2.4 GHz (wavelength of about 12 cm) and a super high frequency of 5 GHz (wavelength of 6 cm), which is certainly far too short for us to see.


5.11.2 HOW WI-FI WORKS?

Wi-Fi basically carries a set of instructions or data that it sends wirelessly, through radio signals, to a receiver of some sort, generally a Wi-Fi card or adapter. These instructions or data are converted to a code which needs only two different states, on and off. So, for instance, to transmit a picture you need a huge number of on/off signals, and for video even more. Fortunately electromagnetic radiation travels very fast, so even something complex like video, with so many signals to transmit, is transmitted extremely quickly. In Wi-Fi, each sent signal has a 6-digit code; the change in the height and the starting place of the wave determines whether a digit is on or off between gaps of no signal. The gaps between the pulses separate the wave symbols, just like the spaces between words, meaning that really complicated instructions can be read and understood with ease.

5.11.3 IEEE 802.11 STANDARD

The IEEE 802.11 standard is a set of media access control (MAC) and physical layer (PHY)

specifications for the implementation of wireless local area network (WLAN) computer

communication in the 2.4, 3.6, 5, and 60 GHz frequency bands.

The Wi-Fi Alliance restricts the use of the Wi-Fi brand to technologies based on the IEEE 802.11 standards from the IEEE.

As with wireless adapters, many routers can use more than one 802.11 standard. Normally,

802.11b routers are to some extent less expensive than others, but because the standard is older,

they're also slower than 802.11a, 802.11g, 802.11n and 802.11ac routers. 802.11n routers are the

most common.


5.11.4 INTERFERENCE

There are several devices that use the 2.4 GHz band. For instance, many cordless telephones, baby monitors, microwave ovens, Bluetooth devices, security cameras, ZigBee devices and other ISM band devices operate at the same frequency at which the Wi-Fi standards 802.11b, 802.11g and 802.11n operate, which may cause interference.

Moreover, as a Wi-Fi signal travels out from its source, it is affected by the objects it encounters. Wi-Fi connections can also be distorted, or the Internet speed lowered, by other devices in the same area: if there are lots of Wi-Fi signals near you, parts of your signal can be delayed by collisions with them, making it slower.

Interference can be prevented in many ways, for example by changing channels, or by moving from the 2.4 GHz frequency to another frequency that is less susceptible to interference, such as the 5 GHz frequency of 802.11a/n.

5.11.5 AD HOC COMMUNICATION

Wi-Fi technology also allows peer-to-peer communication without passing through an access point (AP). Ad hoc networks form wireless LANs that don't require any infrastructure to work. Ad hoc mode can be useful, and easier, where you want only two devices to be connected with each other without requiring a centralized access point. For example, suppose two people are sitting in a hotel room with their laptops and no Wi-Fi network available: the two laptops can be connected directly in ad hoc mode to form a temporary Wi-Fi network without needing a router.


CHAPTER 6

RESULT AND DISCUSSIONS

6.0 TECHNICAL PROBLEM FACED DURING THE PROJECT

The main technical problem during the project, apart from the software side, was over-current in the motors, which damaged the H-bridges and overheated the motors. It is not unusual for a motor rated at 2 A continuous to have a higher short-term current rating, maybe 4 A intermittent, for a maximum of 10 minutes in any half hour. After troubleshooting the problem of burning H-bridges and motors not running properly, we decided to use relay-based H-bridges instead, as they are more tolerant of such abuse than MOSFETs, which was a plus in our case. In addition, some further reasons, or pros, for selecting relay-based H-bridges to drive the motors include:

Relays have zero closed resistance; semiconductors have a forward voltage drop that wastes anywhere from a little to a lot of power.

Relays have infinite open resistance; semiconductors have a leakage current that can affect attached electronics.

Relays can operate at temperature extremes; semiconductors are limited to about 95 C and a little below zero.

Semiconductors can be damaged and short due to voltage peaking, secondary breakdown, over-current, dv/dt and di/dt; relays are primarily damaged only by over-current.

Relays have very high isolation from the control coil; semiconductors as a rule are not isolated from the base, gate or trigger.

But there are also some of the disadvantages which are as follows:


Semiconductors can operate at megahertz speeds; relays are much slower, at around 200 hertz.

Semiconductor switches almost never wear out; relays have a much shorter mechanical contact life.

Semiconductors can amplify analog signals; relays can only open and close.

In our case, however, these cons do not have a great impact on the project. In fact it works absolutely fine, except that the switching speed is relatively lower.

Furthermore, the relay H-bridge has optocouplers for protection. Optocouplers are commonly used to isolate components from potentially dangerous outside sources, and they can offer more protection than a diode.

6.1 RESULT

It can be concluded that most of the goals were achieved with satisfactory results. The mechanical design introduced some challenges because of the time limitation; the electrical design was implemented, but only after facing many challenges. The biggest challenge on the electrical side was getting to know the hardware and software used in the project and implementing the SLAM operations: map generation, object detection and image similarity.


CHAPTER 7

CONCLUSION AND FUTURE WORKS

SLAM based robots can be said to be the pinnacle of robotic advancement in this day and age. In the future these robots can be optimized to a level where self-navigation and machine learning become possible. Although our implementation took a more rudimentary approach to robotics, obtaining sensory data from multiple sensors, it can be further streamlined to a point where a single camera supplies all the data the robot needs to make decisions on its own; future implementations of similar projects may therefore be computer vision centric. One idea from the wildest corners of human imagination is to have such a robot perform similar tasks after a mechanical alteration that brings it closer to a humanoid, thereby moving away from the hackneyed wheel-based concept of robotics. Numerous journals have theoretically envisioned robot-based automation in warehouses; building a larger robot with longer-range sensors, such as the well-known Hokuyo LIDAR, may become a solution to a major bottleneck on the supply chain and retail side of the industry. Mobileye, a Tel Aviv based startup incorporated in the Netherlands, is a refined example of where our project points: it allows a human-controlled car, with some basic modifications, to be converted into a fully autonomous vehicle. Similarly, the German giant Bosch and the American innovator Elon Musk are leaders in self-driving vehicle innovation. With some design alterations, the map-generating and object-picking approach utilizing image processing could be employed in sulfur mines, notorious for causing life-threatening diseases. To sum up, the sky is the limit; our project has barely scratched the surface of the exciting field of localization and mapping.


REFERENCES:

[1] https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping#History

[2] http://www.liftsrus.com/InfoDocs/Forklift_History.html

[3] https://en.wikipedia.org/wiki/H_bridge

[4] http://www.electrical4u.com/permanent-magnet-dc-motor-or-pmdc-motor/

[5] http://www.doityourself.com/stry/how-a-gear-motor-works

[6] https://www.raspberrypi.org/products/raspberry-pi-2-model-b/

[7] https://www.doc.ic.ac.uk/project/2015/163/g1516307/computing_topics_website/introductiontomonocular.html

[8] http://www.sciencedirect.com/science/article/pii/S1571066111001824

[9] http://www.pygame.org/docs/ref/display.html

[10] https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html

[11] http://www.micropik.com/PDF/HCSR04.pdf

[12] http://www.ijteee.org/final-print/july2015/Design-Features-Recognition-Using-Image-Processing-Techniques.pdf

[13] http://www.ijteee.org/final-print/july2015/Design-Features-Recognition-Using-Image-Processing-Techniques.pdf

[14] http://ijarcet.org/wp-content/uploads/IJARCET-VOL-4-ISSUE-12-4310-4313.pdf

[15] http://en.wikipedia.org/wiki/Grayscale

GLOSSARY

Robot: A robot is an autonomous system which exists in a physical world, can sense its

environment, and can act on it to achieve some goals.

Cartesian coordinate robot: A Cartesian coordinate robot has three linear axes of control (x, y,

z).

SLAM: Simultaneous Localization and Mapping is used for robot mapping.

Monocular Slam: In monocular SLAM a single camera is used as the visual sensor.

Grid Slam: In grid SLAM an occupancy grid is used to represent the environment, built by the mapping algorithm.

Forklift: A forklift is an industrial truck used to lift objects and place them over short distances.

Motor: An electric motor is a machine that produces mechanical energy from electrical energy.

PMDC: In Permanent Magnet DC motors, permanent magnets are used as a replacement for the field windings.

Separately Excited Dc Motor: In a separately excited dc motor the field winding is independent of the armature winding. In this motor the armature current does not flow through the field windings.

Hydraulic Motor: A hydraulic motor is used to convert hydraulic energy into mechanical energy. It consists of a rotating shaft and uses hydraulic pressure and flow to generate torque and rotation.

Hydraulic Gear Motor: A gear motor is an extension of a dc motor. It has a gear box that increases torque and decreases speed. Hydraulic gear motors and piston motors are high speed motors.


H-Bridge: An H-bridge is an electronic circuit that enables a voltage to be applied across a motor, or any other load, in either direction. H-bridges can be used to run motors.

Sensor: A sensor is a device which is used to detect physical data from an environment (analog data) and convert it into digital data.

IR Sensor: Infrared sensor is an electronic device. It has high noise immunity.

LIDAR: Light detection and ranging is a sensor in which light is used in the form of a pulsed

laser to measure distances.

Ultrasonic sensor: It is used to determine the distance by utilizing the properties of sound that is

the time difference between sending and receiving the sound pulse.

Open CV: Open Source Computer Vision Library is an open source computer vision and

machine learning software library. The library has more than 2500 optimized algorithms.

Image: Image is a collection of points in a matrix format.

Image processing: Image processing is any type of process that we do on an image. It is

basically a collection of algorithms and tools for extracting out objects or features from an

image.

Pixels: A digital image is composed of a finite number of elements called pixels.

Euclidean Distance: The straight-line distance between two points; the closest neighbors are found by measuring the Euclidean distance between a given point and all of the other points.

OCR: Optical character recognition is the ability of the computer to convert images into strings; this approach utilizes a series of filters to end up with a clean image.


Gaussian blur: It is applied to reduce the noise.

Thresholding: It replaces each pixel in an image with a black pixel if the intensity level is below

a certain constant T or with a white pixel if the pixel intensity value is above this threshold value.

Reverse Motion Planning: Return to initial position after lifting cargo.

Wi-Fi Communication: Wi-Fi is a technology that enables different electronic devices to

connect to a wireless LAN (WLAN) network.

IEEE 802.11 STANDARD: The IEEE 802.11 standard is a set of media access control (MAC)

and physical layer (PHY) specifications for the implementation of wireless local area network

(WLAN) computer communication in the 2.4, 3.6, 5, and 60 GHz frequency bands.

Ad Hoc Communication: Ad hoc networks form wireless LANs that don't require any infrastructure to work.

APPENDIX A

CODING

import pygame, sys
from pygame.locals import *
import pygame.camera
from PIL import Image, ImageFilter
import numpy as np
import os
import RPi.GPIO as GPIO
import time
from itertools import izip, product
import cv2
from pytesseract import image_to_string
import commands

os.putenv('SDL_FBDEV', '/dev/fb1')

pygame.init()

GPIO.setmode(GPIO.BOARD)

TRIG=12

ECHO=10

ECHO1=40


TRIG1=38

ECHO2=35

TRIG2=36

motor1fa=11

motor1ba=15

motor2fa=19

motor2ba=21

motor1fb=37

motor1bb=33

motor2fb=31

motor2bb=29

hydr1=23

hydr2=24

GPIO.setup(motor1fa,GPIO.OUT)

GPIO.output(motor1fa,0)

GPIO.setup(motor1ba,GPIO.OUT)

GPIO.output(motor1ba,0)

GPIO.setup(motor2fa,GPIO.OUT)

GPIO.output(motor2fa,0)

GPIO.setup(motor1ba,GPIO.OUT)

GPIO.output(motor1ba,0)

GPIO.setup(motor2ba,GPIO.OUT)

GPIO.output(motor2ba,0)


GPIO.setup(hydr1,GPIO.OUT)

GPIO.output(hydr1,0)

GPIO.setup(hydr2,GPIO.OUT)

GPIO.output(hydr2,0)

GPIO.setup(motor1fb,GPIO.OUT)

GPIO.output(motor1fb,0)

GPIO.setup(motor1bb,GPIO.OUT)

GPIO.output(motor1bb,0)

GPIO.setup(motor2fb,GPIO.OUT)

GPIO.output(motor2fb,0)

GPIO.setup(motor1bb,GPIO.OUT)

GPIO.output(motor1bb,0)

GPIO.setup(motor2bb,GPIO.OUT)

GPIO.output(motor2bb,0)

GPIO.setup(TRIG,GPIO.OUT)

GPIO.output(TRIG,0)

GPIO.setup(ECHO,GPIO.IN)

GPIO.setup(TRIG1,GPIO.OUT)

GPIO.output(TRIG1,0)

GPIO.setup(ECHO1,GPIO.IN)

GPIO.setup(TRIG2,GPIO.OUT)

GPIO.output(TRIG2,0)

GPIO.setup(ECHO2,GPIO.IN)


GPIO.setwarnings(False)

lcd = pygame.display.set_mode((1000,1000))

lcd.fill(pygame.Color(0,0,0))

def videocap():
    # Stream 320x240 grayscale frames from the USB camera until 'q' is pressed.
    cap = cv2.VideoCapture(0)
    while True:
        # Capture frame-by-frame
        ret = cap.set(3, 320)
        ret = cap.set(4, 240)
        ret, frame = cap.read()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imshow('frame', gray)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
    return

def camerasimilaritys():
    # Grab a frame and compute a percentage similarity between it and the
    # pre-stored reference image 101.jpg (pixel-by-pixel absolute difference).
    pygame.camera.init()
    c = pygame.camera.Camera("/dev/video0", (352, 288))
    c.start()
    image2 = c.get_image()
    pygame.image.save(image2, '/home/pi/102.jpg')
    c.stop()
    image1 = cv2.imread('/home/pi/102.jpg')
    gray1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
    equ1 = cv2.equalizeHist(gray1)
    cv2.imwrite('/home/pi/102.jpg', gray1)
    x = Image.open('/home/pi/102.jpg')
    x = x.filter(ImageFilter.SHARPEN)
    y = Image.open('/home/pi/101.jpg')
    y = y.filter(ImageFilter.SHARPEN)
    assert x.mode == y.mode, "different kinds of images"
    assert x.size == y.size, "different sizes"
    pairs = izip(x.getdata(), y.getdata())
    if len(x.getbands()) == 1:
        # Single-band (grayscale) images: compare values directly.
        dif = sum(abs(p1 - p2) for p1, p2 in pairs)
    else:
        # Multi-band images: compare channel by channel.
        dif = sum(abs(c1 - c2) for p1, p2 in pairs for c1, c2 in zip(p1, p2))
    ncomponents = x.size[0] * x.size[1] * 3
    o = 100 - ((dif / 255.0 * 100) / ncomponents)
    print o
    return o

def camerasimilarity():
    # Grab a small frame and decide whether a red-colored marker is visible,
    # returning a high score (98) if more than one contour is found.
    pygame.camera.init()
    c = pygame.camera.Camera("/dev/video0", (100, 100))
    c.start()
    image1 = c.get_image()
    pygame.image.save(image1, 'extract.jpg')
    c.stop()
    # boundaries = [([17,15,100],[50,56,200]), ([86,31,4],[220,88,50]), ([25,146,190],[62,174,250]), ([103,86,65],[145,133,128])]
    image = cv2.imread('extract.jpg')
    img_hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    # Red wraps around the HSV hue axis, so two ranges are masked and added.
    lower_red = np.array([0, 100, 100])
    upper_red = np.array([10, 255, 255])
    mask0 = cv2.inRange(img_hsv, lower_red, upper_red)
    lower_red = np.array([170, 50, 50])
    upper_red = np.array([180, 255, 255])
    mask1 = cv2.inRange(img_hsv, lower_red, upper_red)
    mask = mask0 + mask1
    # mask = cv2.medianBlur(mask, 3)
    output_image = image.copy()
    output_image[np.where(mask == 0)] = 0
    imgray = cv2.cvtColor(output_image, cv2.COLOR_BGR2GRAY)
    # ret,thresh = cv2.threshold(imgray,127,255,cv2.THRESH_TRUNC)
    thresh = cv2.adaptiveThreshold(imgray, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 1)
    mask = cv2.medianBlur(thresh, 9)
    mask = cv2.medianBlur(thresh, 9)
    cv2.imwrite('extract.jpg', mask)
    contours, hierarchy = cv2.findContours(mask, cv2.RETR_TREE,
        cv2.CHAIN_APPROX_SIMPLE)
    a = len(contours)
    print a
    if a > 1:
        o = 98
    else:
        o = 20
    return o

def aftersimilarity():
    # Capture a larger frame, clean it up (sharpen, blur, adaptive threshold)
    # and run OCR on the result to read the code written on the cargo.
    pygame.camera.init()
    c = pygame.camera.Camera("/dev/video0", (400, 400))
    c.start()
    image1 = c.get_image()
    pygame.image.save(image1, 'extract.jpg')
    c.stop()
    image = cv2.imread('extract.jpg')
    imgray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    kernel_sharpen = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])
    out = cv2.filter2D(imgray, -1, kernel_sharpen)
    gray_blur = cv2.GaussianBlur(out, (15, 15), 0)
    thresh = cv2.adaptiveThreshold(gray_blur, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 1)
    thresh = cv2.medianBlur(thresh, 9)
    cv2.imwrite("thresh.jpg", thresh)
    text = image_to_string(Image.open('thresh.jpg')).strip()
    print text
    return

def distance():
    # Front ultrasonic sensor: time the echo pulse and convert to centimeters.
    GPIO.output(TRIG, 1)
    time.sleep(0.00001)
    GPIO.output(TRIG, 0)
    while GPIO.input(ECHO) == 0:
        # time.sleep(0.01)
        pass
    start = time.time()
    while GPIO.input(ECHO) == 1:
        # time.sleep(0.01)
        pass
    stop = time.time()
    x = (stop - start) * 17150
    time.sleep(0.01)
    if x > 3 and x < 400:
        x = round(x, 2)
    else:
        x = 500   # out of range: report a sentinel value
    return x

def distances():
    # Second ultrasonic sensor, same echo-timing principle.
    GPIO.output(TRIG1, 1)
    time.sleep(0.00001)
    GPIO.output(TRIG1, 0)
    while GPIO.input(ECHO1) == 0:
        pass
    start1 = time.time()
    while GPIO.input(ECHO1) == 1:
        pass
    stop1 = time.time()
    time.sleep(0.01)
    y = (stop1 - start1) * 17150
    time.sleep(0.01)
    if y > 3 and y < 400:
        y = round(y, 2)
    else:
        y = 500
    return y

def distance2():
    # Third ultrasonic sensor, same echo-timing principle.
    GPIO.output(TRIG2, 1)
    time.sleep(0.00001)
    GPIO.output(TRIG2, 0)
    while GPIO.input(ECHO2) == 0:
        pass
    start2 = time.time()
    while GPIO.input(ECHO2) == 1:
        pass
    stop2 = time.time()
    time.sleep(0.01)
    z = (stop2 - start2) * 17150
    time.sleep(0.01)
    if z > 3 and z < 400:
        z = round(z, 2)
    else:
        z = 500
    return z

def backward():
    # Pulse all four drive motors in reverse for 0.5 s, then stop.
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 1)
    GPIO.output(motor1ba, 1)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 1)
    GPIO.output(motor1bb, 1)
    GPIO.output(motor2bb, 0)
    time.sleep(0.5)
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 0)
    GPIO.output(motor1ba, 0)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 0)
    GPIO.output(motor1bb, 0)
    GPIO.output(motor2bb, 0)
    time.sleep(0.1)
    return

def forward():
    # Drive forward for 0.75 s, briefly drive the two sides in opposite
    # directions for 0.1 s, then stop all motors.
    GPIO.output(motor1fa, 1)
    GPIO.output(motor2fa, 0)
    GPIO.output(motor1ba, 0)
    GPIO.output(motor2ba, 1)
    GPIO.output(motor1fb, 1)
    GPIO.output(motor2fb, 0)
    GPIO.output(motor1bb, 0)
    GPIO.output(motor2bb, 1)
    time.sleep(0.75)
    GPIO.output(motor1fa, 1)
    GPIO.output(motor2fa, 0)
    GPIO.output(motor1ba, 0)
    GPIO.output(motor2ba, 1)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 1)
    GPIO.output(motor1bb, 1)
    GPIO.output(motor2bb, 0)
    time.sleep(0.1)
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 0)
    GPIO.output(motor1ba, 0)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 0)
    GPIO.output(motor1bb, 0)
    GPIO.output(motor2bb, 0)
    time.sleep(0.1)
    return

def stop():
    # De-energize all motor relays.
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 0)
    GPIO.output(motor1ba, 0)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 0)
    GPIO.output(motor1bb, 0)
    GPIO.output(motor2bb, 0)
    time.sleep(0.3)
    return

def right():
    # Back up for 0.5 s, pivot for 2.49 s to turn right, then stop.
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 1)
    GPIO.output(motor1ba, 1)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 1)
    GPIO.output(motor1bb, 1)
    GPIO.output(motor2bb, 0)
    time.sleep(0.5)
    GPIO.output(motor1fa, 1)
    GPIO.output(motor2fa, 0)
    GPIO.output(motor1ba, 0)
    GPIO.output(motor2ba, 1)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 1)
    GPIO.output(motor1bb, 1)
    GPIO.output(motor2bb, 0)
    time.sleep(2.49)
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 0)
    GPIO.output(motor1ba, 0)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 0)
    GPIO.output(motor1bb, 0)
    GPIO.output(motor2bb, 0)
    time.sleep(1)
    return

def left():
    # Back up for 0.5 s, pivot the opposite way for 2.49 s to turn left,
    # then stop.
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 1)
    GPIO.output(motor1ba, 1)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 1)
    GPIO.output(motor1bb, 1)
    GPIO.output(motor2bb, 0)
    time.sleep(0.5)
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 1)
    GPIO.output(motor1ba, 1)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 1)
    GPIO.output(motor2fb, 0)
    GPIO.output(motor1bb, 0)
    GPIO.output(motor2bb, 1)
    time.sleep(2.49)
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 0)
    GPIO.output(motor1ba, 0)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 0)
    GPIO.output(motor1bb, 0)
    GPIO.output(motor2bb, 0)
    time.sleep(1)
    return

def up():
    # Run the hydraulic pump one way for 30 s to raise the forks.
    GPIO.output(hydr1, 1)
    GPIO.output(hydr2, 0)
    time.sleep(30)
    GPIO.output(hydr1, 0)
    GPIO.output(hydr2, 0)
    time.sleep(0.1)
    return

def down():
    # Run the hydraulic pump the other way for 30 s to lower the forks.
    GPIO.output(hydr1, 0)
    GPIO.output(hydr2, 1)
    time.sleep(30)
    GPIO.output(hydr1, 0)
    GPIO.output(hydr2, 0)
    time.sleep(0.1)
    return

def turnaround():
    # Pivot in place for 5 s (sides driven in opposite directions) so the
    # robot faces back the way it came, then stop.
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 1)
    GPIO.output(motor1ba, 1)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 1)
    GPIO.output(motor2fb, 0)
    GPIO.output(motor1bb, 0)
    GPIO.output(motor2bb, 1)
    time.sleep(5)
    GPIO.output(motor1fa, 0)
    GPIO.output(motor2fa, 0)
    GPIO.output(motor1ba, 0)
    GPIO.output(motor2ba, 0)
    GPIO.output(motor1fb, 0)
    GPIO.output(motor2fb, 0)
    GPIO.output(motor1bb, 0)
    GPIO.output(motor2bb, 0)
    time.sleep(0.1)
    return

r=100


s=500

a=0

b=0

back=[]

code=0

for t in range(0,100):

if distance()>3 and distances()>3 and distance2()>40 and a==0 and b==0 and code==0:

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r,s-5], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r,s-5], 6)

pygame.display.update()

time.sleep(0.1)

s-=10

stop()

a=1

b=1

if distance2()<40 and camerasimilarity()<96.5:

break


elif distance2()<40 and camerasimilarity()>96.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+5], 6)

pygame.display.update()

aftersimilarity()

up()

turnaround()

a=0

b=0

code=1

back.append((r,s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distance()>distances() and distance2()<40 and a==1 and b==1:

stop()

right()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r+5,s], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r+5,s], 6)

pygame.display.update()

time.sleep(0.1)

r+=10

stop()

a=1

b=2

if distance2()<40 and camerasimilarity()<94.5 :

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s


pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

aftersimilarity()

up()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distances()>distance() and distance2()<40 and a==1 and b==1:

stop()


left()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r-5,s], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r-5,s], 6)

pygame.display.update()

time.sleep(0.1)

r-=10

stop()

a=2

b=1

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

aftersimilarity()


up()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distances()>distance() and distance2()<40 and a==2 and b==1:

stop()

left()


stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r,s+10], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r,s+10], 6)

pygame.display.update()

time.sleep(0.1)

s+=10

stop()

a=3

b=1

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

aftersimilarity()

up()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distance()>distances() and distance2()<40 and a==2 and b==1:

stop()

right()

stop()


pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r,s-5], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r,s-5], 6)

pygame.display.update()

time.sleep(0.1)

s-=10

stop()

a=2

b=2

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

aftersimilarity()

up()

turnaround()


a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distances()>distance() and distance2()<40 and a==1 and b==2:

stop()

left()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r,s-5], 10)

pygame.display.update()


back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r,s-5], 6)

pygame.display.update()

time.sleep(0.1)

s-=10

stop()

a=2

b=2

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

up()

turnaround()

a=0

b=0

code=1


back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distance()>distances() and distance2()<40 and a==1 and b==2:

stop()

right()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r,s+5], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()


stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r,s+5], 6)

pygame.display.update()

time.sleep(0.1)

s+=10

stop()

a=1

b=3

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

up()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:


print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distances()>distance() and distance2()<40 and a==3 and b==1:

stop()

left()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r+5,s], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r+5,s], 6)


pygame.display.update()

time.sleep(0.1)

r+=10

stop()

a=1

b=2

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

up()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s


pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distance()>distances() and distance2()<40 and a==3 and b==1:

stop()

right()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r-5,s], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r-5,s], 6)

pygame.display.update()

time.sleep(0.1)

r-=10


stop()

a=2

b=1

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

up()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()


a=0

b=0

code=1

back.append((r, s))

break

if distance()>distances() and distance2()<40 and a==2 and b==2:

stop()

right()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r+5,s], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r+5,s], 6)

pygame.display.update()

time.sleep(0.1)

r+=10

stop()

a=1

b=2


if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

up()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distances()>distance() and distance2()<40 and a==2 and b==2:

stop()

left()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r-5,s], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r-5,s], 6)

pygame.display.update()

time.sleep(0.1)

r-=10

stop()

a=2

b=1

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:


print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distances()>distance() and distance2()<40 and a==1 and b==3:

stop()

left()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r+5,s], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r+5,s], 6)

pygame.display.update()

time.sleep(0.1)

r+=10

stop()

a=1

b=2

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s


pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

up()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if distance()>distances() and distance2()<25 and a==1 and b==3:

stop()


right()

stop()

pygame.draw.line(lcd, pygame.Color(165,123,111), [r,s], [r-5,s], 10)

pygame.display.update()

back.append((r, s))

for x in range(0,100):

forward()

stop()

pygame.draw.line(lcd, pygame.Color(0,123,111), [r,s], [r-5,s], 6)

pygame.display.update()

time.sleep(0.1)

r-=10

stop()

a=2

b=1

if distance2()<40 and camerasimilarity()<94.5:

break

elif distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

up()


turnaround()

a=0

b=0

code=1

back.append((r, s))

break

elif distance()==500 and distances()==500 and distance2()<40 and camerasimilarity()>94.5:

print r

print s

pygame.draw.line(lcd, pygame.Color(100,123,222), [r,s], [r,s+0.1], 6)

pygame.display.update()

turnaround()

a=0

b=0

code=1

back.append((r, s))

break

if code==1 and a==0 and b==0:

turnaround()

t=len(back)

print t

for u in range(1,100):

if (t-u-1)>=0:

y=np.subtract(back[t-u],back[t-u-1])

i=round(np.linalg.norm(y))

GPIO.output(motor1fa,1)

GPIO.output(motor2fa,0)

GPIO.output(motor1ba,0)

GPIO.output(motor2ba,1)

GPIO.output(motor1fb,1)

GPIO.output(motor2fb,0)

GPIO.output(motor1bb,0)

GPIO.output(motor2bb,1)

time.sleep(i)

GPIO.output(motor1fa,0)

GPIO.output(motor2fa,0)

GPIO.output(motor1ba,0)

GPIO.output(motor2ba,0)

GPIO.output(motor1fb,0)

GPIO.output(motor2fb,0)

GPIO.output(motor1bb,0)

GPIO.output(motor2bb,0)

time.sleep(2)

elif (t-u-1)<0:

sys.exit()


elif distance()>distances():

right()

elif distances()>distance():

left()

GPIO.cleanup()

APPENDIX B

COST ANALYSIS


DATASHEETS

TURNITIN REPORT
