
Team D: Popeye the Sailor Bot

Autonomous Robot for Maritime Navigation (ShipBot)

18-578 Mechatronic Design


12-May-2017

Team D:
Advait Deshpande
Alp Karacakol
Anand Kapadia
Kaveh Nikou
Sritanay Mandava
1. Abstract
With the rising popularity of mobile robotics, robots are rapidly evolving to fulfill a wide
variety of roles beyond their traditional role as industrial workhorses. One such application
is as autonomous service robots on ships, working alone or alongside other robots and humans to
operate and control the vessel. Popeye the Sailor Bot is one such robot. Powered by an onboard
power supply, Popeye can traverse the upper and lower decks of a ship using a dual-wheel
differential drive, localize itself using a multitude of sensors, and detect landmarks using a
monocular camera. Popeye can wirelessly receive tasks from a user or planner and
accomplish them using a five degree-of-freedom (DoF) manipulator equipped with a soft end
effector that tolerates state errors. Built on the Robot Operating System (ROS), Popeye's
software architecture has access to many open-source robotics libraries that can be used to
expand its capabilities in the future.
1. Abstract

2. Project Description

3. Design Requirements
3.1 Explicit Requirements
3.2 Implied Requirements
3.3 Requirements Stemming from Coolness Factor(s)

4. Functional Architecture
4.1 Mobility
4.2 Localization
4.3 State Detection
4.4 Manipulation

5. Design Concepts
5.1 Mobility
5.2 Localization
5.3 State Detection
5.4 Manipulation
5.4.1 Arm
5.4.2 End Effector

6. Cyber Physical Architecture

7. System Description and Evaluation
7.1 Descriptions/Depictions
7.1.1 Mobility: Base
7.1.2 Localization
7.1.3 Vision
7.1.4 Manipulation
7.2 Analysis and Testing
7.3 Performance Evaluation
7.4 Strong/Weak Points

8. Project Management
8.1 Schedule
8.2 Budget
8.3 Risk Management

9. Conclusions
9.1 Lessons Learned
9.2 Conclusions: Future Work

10. References

11. Appendix
2. Project Description
The ShipBot project, sponsored by Leidos, was inspired by the company's vision of bringing
robot autonomy to maritime travel and operations. Leidos has already taken the first steps toward this
vision with the concept design of an unmanned boat, the ACTUV, capable of travelling thousands of
miles without human contact [1]. To further its vision of maritime autonomy, Leidos
would like to bring the same technology to older-generation vessels. One way to do this
is to install service robots on vessels to complete the tasks typically performed by
humans. This strategy avoids the high costs associated with a complete retrofit of a ship's
electromechanical systems.

The goal of this project is to build a mechatronic device (a service robot) capable of
operating the electrical and mechanical devices found in various places on a ship, such as
switches and valves. To simulate conditions similar to those at sea, the device must operate
autonomously, guided only by commands sent through a mission file.

3. Design Requirements

3.1 Explicit Requirements


The ShipBot must:
1. Operate within a 3 X 5 X 2 space and have the ability to manipulate devices within 1
outside this range, oriented horizontally or vertically.
2. Not exceed initial dimensions of 1.5 X 1.5 X 2.5.
3. Be portable and have its own dedicated power supply.
4. Not damage its operating environment or the devices it interacts with.
5. Localize and calibrate itself within 1 minute.
6. Carry all necessary hardware and tools onboard.
7. Traverse the operating environment and set devices to the positions prescribed by the mission file.
8. Operate autonomously without human intervention.
9. Cost within a given budget of $1000.

3.2 Implied Requirements


The ShipBot must:
1. Localize itself prior to operating valves.
2. Detect environmental boundaries.
3. Detect the current state of devices in order to manipulate them.
3.3 Requirements Stemming from Coolness Factor(s).
The ShipBot must:
1. Not be limited in range by power cables or other tethers.
2. Assess state of devices before and after manipulation to ensure task completion.
3. Localize itself from any starting position.
4. Use the Robot Operating System (ROS) as its software and communications
framework.
5. Operate with a battery.

4. Functional Architecture
Popeye the Sailor Bot is divided into four major subsystems: Mobility, Localization, State
Detection, and Manipulation. The subsystems, along with a functional schematic and operational
flow, are presented in Figure 4.1.

4.1 Mobility
The mobility subsystem controls the locomotion of the ShipBot. Because the drive is
non-holonomic [2], three motion primitives suffice to reach all points in the workspace:
forward, backward, and rotational motion.
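The relationship between these primitives and the two wheel speeds can be sketched with the standard differential-drive conversion (a generic illustration; the wheel radius and track width below are placeholders, not Popeye's actual dimensions):

```python
def wheel_speeds(v, omega, wheel_radius=0.05, track_width=0.30):
    """Convert a body velocity command (v in m/s, omega in rad/s) into
    left/right wheel angular velocities (rad/s) for a differential drive."""
    v_left = v - omega * track_width / 2.0   # linear speed of the left wheel
    v_right = v + omega * track_width / 2.0  # linear speed of the right wheel
    return v_left / wheel_radius, v_right / wheel_radius

# The three motion primitives:
forward = wheel_speeds(0.2, 0.0)    # equal wheel speeds -> straight ahead
backward = wheel_speeds(-0.2, 0.0)  # equal negative speeds -> straight back
rotate = wheel_speeds(0.0, 1.0)     # opposite wheel speeds -> turn in place
```

Equal wheel commands produce pure translation, while equal-and-opposite commands produce pure rotation, which is why these three primitives cover the workspace.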

4.2 Localization
The localization subsystem allows the ShipBot to accurately reach desired
positions on the game board and computes the optimal paths to go from point to point. It is
physically composed of an array of limit switches and proximity sensors.

4.3 State Detection


The state detection subsystem allows the detection of the current states of devices that
need to be manipulated. It uses computer vision algorithms to detect the position, orientation,
and state (on/off) of devices.

4.4 Manipulation
The manipulation subsystem takes as input the position of the target device, plans a
trajectory, and computes the inverse kinematics to reach it. It then manipulates the device
using a versatile end effector.
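As a simplified illustration of the inverse-kinematics step, the closed-form solution for a planar two-link arm is sketched below (the real arm has five revolute joints; the link lengths here are hypothetical, not the arm's actual geometry):

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Closed-form inverse kinematics for a planar 2-link arm: returns the
    joint angles (rad) reaching (x, y), elbow-down solution.
    Raises ValueError when the target lies outside the reachable annulus."""
    d2 = x * x + y * y                                   # squared distance to target
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)    # law of cosines
    if not -1.0 <= cos_q2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(cos_q2)                               # elbow angle
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2
```

For a redundant 5R arm the same idea is applied numerically or with additional constraints rather than in closed form.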
Figure 4.1 Functional schematics and operational break-down of Popeye the Sailor Bot

5. Design Concepts
This section discusses the design concept explorations carried out for
the four subsystems identified in the previous section: Mobility, Localization, State Detection,
and Manipulation. The rationale behind each choice is briefly explained, with summaries of the
design trade studies that were performed.

5.1 Mobility
Looking at common mobility solutions for robots of similar size and functional domain,
the pros and cons of two-wheel differential drives [3], holonomic drives and mecanum drives
were explored for suitability. While holonomic and omnidirectional mecanum drives provide the
advantage of excellent maneuverability; the reliability, ease of control, and the simplicity of
manufacturing strongly favored the choice of two wheel differential drives. A table presenting
the features of each is presented below (Table 5.1).
Table 5.1: Design Concept Choice: Mobility

Two-wheel Differential Drive
  Pros: only 2 actuators required; relatively simple controls structure; simple design/fabrication.
  Cons: limited maneuverability.
Omnidirectional Mecanum
  Pros: omnidirectional planar motion possible; excellent maneuverability.
  Cons: not as durable as standard wheels and has more slip; 4 independent actuators required.
Holonomic Drive
  Pros: omnidirectional planar motion possible; excellent maneuverability.
  Cons: not as durable as standard wheels and has more slip; 4 independent actuators required.

5.2 Localization
Localization was carried out using a combination of limit switches and ultrasonic sensors.
The input from the sensors allowed cognition of space with respect to barriers on the test bed and
limit switches were a simple solution to boundary detection. Infrared sensors suffered from the
limitation of having a nonlinear response over the operating range and variability with
surrounding light. Camera vision was also considered for localization, but proved to be
unreliable and mechanically unstable for this application.

Table 5.2: Comparison of various sensor types

Limit Switch [4]
  Pros: very simple to use; simple solution for contact detection.
  Cons: subject to mechanical failure over time.
Sharp IR Sensor [5]
  Pros: precise in its range of operation; 10-80 cm models are enough for the operating space; not as expensive as high-quality ultrasonic sensors.
  Cons: non-linear behavior; requires good point fitting or a lookup table; can be affected by surrounding light sources.
Ultrasonic Sensor [6]
  Pros: high distance range (0-6 m); relatively cheap models exist on the market.
  Cons: accuracy can be lower for cheaper models; precise models are expensive.
Vision - Camera [7]
  Pros: OpenCV libraries available; can detect device positions.
  Cons: camera depth may not be reliable; vibrations during movement may affect performance.

5.3 State Detection

Determining the states of the devices to be manipulated required identifying colored
markers on the valves to detect their starting angle, so computer vision was the natural choice.
OpenCV was used for its simplicity and breadth of built-in functions. Edge detection
and the size of bounding boxes at a known distance from the valves were used to estimate their
orientation.
5.4 Manipulation

5.4.1 Arm
Three 5-DoF joint configurations suitable for this application were explored: 4RP, 5R,
and 3R2P. While the 3R2P and 4RP systems would have cost much less, the ready
availability of (borrowed) HEBI motors, along with their easy control, led to the choice
of the 5R joint design for the arm.
Table 5.3: Design Concept Choice: Arm

Concept Control Fixtures Cost Aesthetics


3R2P Medium Medium Low Medium

5R Easy Complex Medium High

4RP Medium Medium Low Medium

5.4.2 End Effector

The end effector needed to be flexible in some directions and rigid in others.
With this in mind, memory foam with a lip was deemed the best choice for the intended
application.
Table 5.4: Design Concept Choice: End Effector

Two prongs
  Pros: easy construction.
  Cons: high localization accuracy needed for garden valves.
Memory foam + lip
  Pros: can self-correct, so localization accuracy is not critical; easy construction.
  Cons: memory foam might wear out over time.
Retractable prongs
  Pros: can self-correct for garden valves.
  Cons: complicated construction; high localization accuracy needed.
Retractable prongs + memory foam
  Pros: most versatile.
  Cons: complicated construction; possibility of glue jamming the prong guideways.

6. Cyber Physical Architecture


In this section, detailed hardware and software connections are shown for the subsystems
of sensing & controls (Figure 6.1) and power systems (Figure 6.2). The Arduino [8] was chosen
to handle the basic sensors and actuation of the robot, whereas the Raspberry Pi [9] was chosen
for the vision algorithm, processing, and control of the HEBI modules. They are connected through a
UART serial interface, allowing sensor information to be passed to the main algorithm
running through ROS on the Pi, and movement commands to be sent back to the Arduino. The
HEBI modules were used for actuation and communicate with the system through a router.
This router also generates a Wi-Fi signal for the robot, allowing wireless debugging and
communication.
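The line-based exchange over this serial link might look like the sketch below (the `MOVE` command and the sensor-report format are invented for illustration, not Popeye's actual protocol; on the Pi the real reads and writes would go through pyserial's `serial.Serial`):

```python
def parse_sensor_line(line):
    """Parse a hypothetical sensor report from the Arduino, e.g.
    "US 34 52 LS 0 1" -> two ultrasonic ranges (cm) and two
    limit-switch states. This message format is illustrative only."""
    t = line.split()
    return {"ultrasonic_cm": [int(t[1]), int(t[2])],
            "limit_switches": [int(t[4]), int(t[5])]}

def format_drive_command(v, omega):
    """Encode a drive command as a single newline-terminated line, the
    kind of message a UART link like this typically carries."""
    return f"MOVE {v:.2f} {omega:.2f}\n"

# On the Pi these would wrap a pyserial port (port name is an assumption):
#   with serial.Serial("/dev/ttyACM0", 115200, timeout=1.0) as link:
#       link.write(format_drive_command(0.2, 0.0).encode())
#       state = parse_sensor_line(link.readline().decode())
```

Keeping the protocol line-based makes it easy to log, replay, and debug over the same Wi-Fi link used for development.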
Figure 6.1 Sensing and Controls

Figure 6.2. Power System


The robot itself is powered by a 4S lithium polymer (Li-Po) battery for locomotion
and a 6S Li-Po battery for actuation, providing portability and high power without
inhibiting freedom of motion. However, due to the nature of lithium
batteries, a monitoring regulator is attached and then connected to three voltage regulators: one
for the HEBI module power, one for DC motor power, and one for the microcontrollers. The
sensors themselves are powered directly from the microcontroller, whereas the motors are driven
through motor drivers.
7. System Description and Evaluation

7.1. Descriptions/Depictions
An integrated snapshot of the system with its various subsystems is shown below in
figure 7.1:

Figure 7.1: Full System Depiction

7.1.1. Mobility: Base


The base (figure 7.2) consists of an 80/20 aluminum frame supporting two acrylic plates:
one that mounts the electronics, battery, and wheels, and another that supports the arm, camera,
and Raspberry Pi. Custom 3D-printed covers protect the base, and specially designed
mounts provide a stable resting position for the ultrasonic sensors and limit switches. Nitrile
wheels mounted on high-torque brushed DC motors ensure sufficient friction with the ground for
good maneuverability. A set of freely rotating caster wheels keeps the ShipBot balanced.
Figure 7.2: Subsystem Depiction: Base

7.1.2. Localization
The localization procedure (figures 7.3 and 7.4) involves a simple motion in the x
direction until both limit switches detect contact, then switching direction by rotating 90 degrees
and repeating the same procedure. Due to the layout of the game board, this places the robot at a
point that can be set as a global origin with x and y traversal directions. The procedure uses a
combination of motor encoders, limit switches, and ultrasonic distance sensors, as explained
in the flowchart below:

Figure 7.3: Subsystem Depiction: Localization procedure
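In pseudocode-like Python, the procedure amounts to the loop below (the `robot` interface methods are hypothetical stand-ins for the real motor and sensor drivers, not the actual codebase):

```python
def localize(robot):
    """Drive until both front limit switches contact a wall, rotate 90
    degrees, and repeat; the resulting corner becomes the global origin."""
    for _ in range(2):                      # two walls define the corner
        while not robot.both_limit_switches_pressed():
            robot.drive_forward()           # creep toward the boundary
        robot.back_off()                    # release the switches
        robot.rotate_deg(90)                # face the next wall
    robot.set_origin(0.0, 0.0)              # encoders now measure from here
```

Because both switches must be pressed simultaneously, the loop also squares the chassis against each wall before rotating, which is what makes the corner a repeatable origin.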
7.1.3. Vision
The vision detection algorithm uses OpenCV for color and edge detection, contour
detection, noise-reducing Gaussian blurs, and heavy error checking. The algorithm first
exhaustively searches only the specific region of interest (ROI) where it expects the valve to
be, eliminating unnecessary processing and noise. It then performs color detection, using Gaussian
blurs as low-pass filters to reduce noise, and finally finds a bounding box for the valve based on
Canny edge detection and the OpenCV function findContours. Using the geometry of this
bounding box (height, width, area, and relative ratios), the algorithm determines the
orientation of the valve and returns its state (figure 7.4). This state is returned to the arm for
actuation.

Figure 7.4. Valve detection algorithm.

7.1.4. Manipulation
The arm is constructed from HEBI motors (X1s and X9s) connected with hollow aluminum
rods. A total of five motors is used for the five degrees of freedom required. The
flowchart of the arm logic is shown in Appendix A.

The end effector has a cylindrical body with a wider lip toward the bottom that allows
actuation of breaker switches. On the bottom face is a cuboidal piece of memory foam
rounded off into a hemisphere. This lets the effector sit into the features of valves and
grip them akin to a human hand. The body is 3D printed on an industrial-grade printer and the
foam is glued on with a combination of epoxy adhesive and superglue. Figure 7.5 shows the
arm with the end effector mounted on it.
Figure 7.5: Subsystem Depiction: End Effector and Arm

7.2. Analysis and Testing


All of the subsystems were tested individually to achieve better performance. The base
was tested under heavy load to verify that the motors supplied sufficient torque to locomote
under twice the estimated load of the robot (see figure 7.7). Localization was tested for each
object position to ensure good positioning with respect to the targeted object. Vision trials were
completed with different viewing angles and changing background light, including conditions
with no lights on and flashlight-enhanced brightness; this led to choosing HSV over RGB for
color detection. Finally, manipulation was tested with different object positions and
with different robot positions relative to the object. The control gains of the HEBI motors
were tuned using the position responses to various motions (figure 7.8). Since this project was
the first time the HEBI motors were controlled through a Raspberry Pi, extensive
communications testing was performed to ensure that commands and feedback could be
exchanged with the motors at a reasonable frequency (~50 Hz). The manipulation and
vision integration was also tested heavily, as the arm controller was designed to react to the
valve being in slightly different places.

Figure 7.7: Test of the Base under Heavy Load


Figure 7.8: Response plot of the base motor of the 5-DoF arm, used to tune the PD gains

Power testing led to separate batteries being used for locomotion and manipulation, as the
arm made jerkier motions when the whole system ran on a single battery. A single battery would
also drain within fifteen to twenty minutes, which was not enough for proper test runs.
Additionally, as communication was done through ROS (Robot Operating System), after the
initial hurdle of the setup, we found no further issues in subsystem integration.

7.3. Performance Evaluation


The performance of the ShipBot can be broken down into its four main subsystems:
mobility, localization, vision, and manipulation. These subsystems can be judged on the criteria
of ability, accuracy, and reliability. Finally, the system can be judged as a whole by analyzing
integration and error checking.

Popeye the ShipBot performed very well when looking at each subsystem separately. On
the criteria of ability and accuracy, all qualifications were passed. The ShipBot was able to:

1. accurately identify every valve (running at a minimum 5 fps).


2. accurately move to every target location.
3. localize itself given any placement on the map (but not any orientation).
4. manipulate all specified devices.

Additionally, the ShipBot completed tasks very quickly: any mission file, regardless of
the number of stations visited, could be completed within three to five minutes.
Reliability was the one category with mixed results. Shadows or direct sunlight on the valves
could affect vision accuracy, and localization was slightly inaccurate: although each station
was always visited, the robot's position varied within a 3 cm range in depth.
Manipulation, while entirely reliable on its own, was affected by these two inaccuracies.

Yet these issues were relatively infrequent, and when the subsystems were integrated there was a
maximum of 25% error in manipulation and detection. Additionally, there was sufficient error
checking on the ShipBot. If the valve angle was detected to be wrong after an attempt at
manipulation, the robot tried again. Similarly, the vision and manipulation systems were tied
together to tolerate 1-5 cm of localization error in the x and y axes. However, the robot was
unable to identify the state of breakers (only their location was detected), relocalize if lost,
or account for wheel slip and encoder error.
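The detect-manipulate-recheck loop described above can be sketched as follows (the `detect_angle`/`turn_valve` callables, the tolerance, and the retry count are hypothetical stand-ins for the real interfaces):

```python
def manipulate_with_retry(detect_angle, turn_valve, target_deg,
                          tol_deg=5.0, max_attempts=3):
    """Turn a valve, re-check its angle with the vision system, and
    retry until the measured state matches the commanded one."""
    for _ in range(max_attempts):
        error = target_deg - detect_angle()
        if abs(error) <= tol_deg:
            return True                 # vision confirms the valve state
        turn_valve(error)               # command the residual rotation
    return abs(target_deg - detect_angle()) <= tol_deg
```

Closing the loop through vision in this way is what lets manipulation absorb both the lighting-induced detection errors and the few centimeters of localization error.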

The last factor in evaluating the ShipBot's performance is the coolness factor, or what makes
the robot unique. Popeye the ShipBot had multiple successfully implemented coolness factors,
including no reliance on offboard power, use of the Robot Operating System, and the ability
to localize from anywhere on the map. Hence, while not always able to reliably complete a mission
file, the overall performance of Popeye the ShipBot was strong.

7.4. Strong/Weak Points


Some of the strong points of Popeye that deserve mention:
1. Onboard power with untethered operation - The robot is entirely battery powered and can
run for up to 2 hours of continuous operation.
2. Runs off a single Raspberry Pi, integrated using ROS - The Robot Operating System is
regarded as the new industry standard for communication in robots. It allows easy
integration of a wide range of tools, including visualization of robot kinematics and sensor
data, path planning and perception algorithms, as well as low-level drivers for commonly
used sensors. It also has management tools for monitoring and inspecting
messages.
3. Localizes from anywhere - Using the limit switches and ultrasonic sensors on the front and
back, the robot can align itself to the testbed edges and locate the corner as the origin.
4. Compact construction - All the electrical systems are secured in a stable 80/20 base, and
in its resting position it is one of the smallest robots.
5. Versatile end effector - The memory-foam end effector allows the robot to overcome
slight inaccuracies in the vision code.

Some points that could be improved upon in upcoming iterations:
1. Depth detection - Use two vision devices or a different algorithm to estimate the depth
of a device. Recovering the depth of the valve or switch would greatly improve the
performance of the ShipBot.
2. An attachment for hooking onto the test bed for precise localization - A guidance system
that latches onto the guide rails would improve the accuracy of localization.
3. Machine learning to improve device state detection.
8. Project management

8.1. Schedule
Table 8.1. Project Management Time Scheduling for each week
Week | Task Name | Functional Group | Assigned Resources | Milestone/Deliverable
2 Background research on components All All N/A
3 Build Mockup Design All All Website Check #1
Preliminary robot design All All Design Proposal
Detailed design of Robot base platform Mobility Sritanay,Kaveh

Detailed design of Robot Manipulator Localization Advait, Kaveh


Begin Testing Camera Localization Alp, Anand
4 BOM for Base, manipulator All Sritanay,Advait Mockup Demo
Begin Fabrication of Base Mobility Sritanay, Kaveh
Begin Fabrication of Manipulator Manipulation Advait, Kaveh
Order Camera + RasPi Localization Alp

5 Continue construction of robot base. Mobility Sritanay,Kaveh,

Test and program robot base Software Anand


Begin mapping CV shapes Localization Alp
6 Continue Robot arm construction Manipulation Advait, Kaveh System Demo #1 (Base)
Continue mapping shapes for CV Localization Alp, Anand Website Check #2
7 Controls and programming of manipulator Software Alp, Kaveh Design Presentation
Continue mapping shapes for CV Localization Anand System Demo #2 (CV)
8 Testing of base/mobility Software Alp, Kaveh
9 Continue testing manipulator Software Anand, Alp System Demo #3
Combine Manipulator and base All Sritanay, Advait
10 Program Mobile base with manipulator Software Alp, Kaveh System Demo #4
11 Program Mobile base with manipulator Software Alp, Kaveh
Test Mobile base w/ manipulator All All

12 Combine CV with robot Localization Alp, Anand System Demo #5


13 System Debug and fine tune All All System Demo #6
14 System Debug and fine tune All All System Demo #7
Scheduling, on the whole, posed no major issues for the team. One area that deserved
more attention was ordering the right parts on time, because any wrong purchase cost
weeks of waiting for the next order. Another minor issue was scheduling time on the test bed,
which was pre-booked in the last few weeks, leaving marginal time and costing the team a major
portion of its debugging time.

8.2. Budget

The total available budget was $1250 and $1140 of it was used for parts majorly and the
majority of the fabrication was in-house.
Table 8.2. Itemized Total Cost of Project
Description Cost
37D mm Metal Gearmotor Bracket (Pair) $7.95
Flange Mount Ball Caster $15.02
8mm Wheel Hub $29.04
Arduino Mega $50.65
10A 5-25V Dual Channel DC Motor Driver $23.49
IMU $39.65
6" Plaction Wheel with Blue Nitrile Thread (am-3316) $49.70
12V 90 rpm 99.11oz-in 1:26.9 Brushed DC Gear Motor w/ Encoder $84.48
Very Soft Memory Foam(5.0lb) in 4.000"L x 4.000"W x 4.000"H rectangles $24.80
12V, 2.2A,5A,12A,15A Step-Down Voltage Regulator $78.80
LiPo Voltage Checker + Warning Buzzer $5.49
Snap-Action Switch with 16.3mm Roller Lever $24.35
Emergency Stop Switch $5.90
HC-SR04 Ultrasonic Range Finder - 6 $17.29
Raspberry Pi Camera Case / Enclosure $7.49
XT60 Connector Male-Female Pair, Blue $1.75
Ball Caster with 1/2 Metal Ball $7.96
T Connector Male-Female Pair $4.47
22.2V 4500mAh 45C Lipo Battery with XT60 Plug $109.99
H\Fire Resistant Bag LiPo Battery Safe Fireproof Explosion proof Guard $12.99
Red Devil 1170 Plexiglass Cutting Tool $5.40
Bracket, 2" Long for 1" High Single Profile Aluminum T-Slotted Framing Extrusion $46.80
Cable Matters (2-Pack) Cat 5e Retractable Ethernet Network $8.00
180Pcs Assorted M3 Nylon Screws $9.00
JacobsParts DC Power Pigtail Male Barrel Plug 6-Inch Wire 5.5mm x 2.1mm $3.50
Dycem Strip $13.99
SB Components Clear Case for Raspberry Pi 3 $5.97
Flex CSI Cable for Raspberry Pi Camera - 300mm / 12" X 2 $25.70
Raspberry Pi 3 with 2.5A Micro USB Power Supply (UL Listed) x 2 $81.99
Raspberry Pi Camera Module V2 - 8 Megapixel, 1080p X 3 $89.67
Multistar High Capacity 6S 10000mAh Multi-Rotor Lipo Pack $109.90
House Power Supply 12volt 6amp 72watt $11.49
Adjustable Pi Camera Mount $4.95
3D Printing $40
HEBI X Series Actuators x 5 (estimate) $5000

Total $6040.00

8.3 Risk Management
Table 8.3. Hazard Analysis and Risk Management worksheet
(Columns: Design Component | Associated Risk | Probability | Impact | Prevention Methods | Mitigation Solutions)

Drive System
- Slipping of wheels causing position error (Likely / High). Prevention: use high-friction wheels. Mitigation: used ultrasonic sensors to reduce acceleration and sudden jerky movements.
- Motor failure (Not very likely / High). Prevention: limit current to motors. Mitigation: ensure backup motors are available.

Manipulator
- Low resolution/accuracy in positioning (Likely / High). Prevention: use high-torque servo motors. Mitigation: correct for error using the vision/manipulation system, which allows the robot to manipulate without perfect localization.
- Unable to access devices in the correct position and/or orientation (Likely / Very High). Prevention: design for the necessary workspace and degrees of freedom. Mitigation: program the base of the robot to assist the manipulator with positioning prior to manipulation.
- Discrepancy between target position and manipulator position (Likely / High). Prevention: use high-torque servo motors. Mitigation: correct for error using the vision system.

Localization & Vision
- Unable to position the robot accurately for manipulation (Likely / High). Prevention: use limit switches. Mitigation: re-localize the bot.
- Camera does not detect valves or switches (Likely / Medium). Prevention: used color detection and bounding boxes for exact detection. Mitigation: recheck 3 times before re-orienting the bot.

Electronics
- Microcontroller (Arduino) will not have the processing power to compute the optimal path or other high-level algorithms (Likely / Medium). Prevention: use the Raspberry Pi and communication between the two to offload heavy computational work. Mitigation: N/A.
- Battery can be dangerous if not charged properly or if voltage levels are not monitored (Possible / Very High). Prevention: buy a battery management system with the battery to monitor voltage. Mitigation: N/A.

Overall, the risks involved with the project were managed quite well. Good project
management allowed our team to reach milestones in sufficient time, which let us detect
problems sooner rather than later. In addition, our design and cost management left us with
enough room in the budget to buy spare parts. This proved crucial, as we burned one Raspberry
Pi and damaged two cameras.

9. Conclusions
9.1. Lessons learned
There were a few key lessons learned from creating Popeye the ShipBot. For
instance, we learned that limit switches are very fragile and must be designed into a system
with careful consideration. A bumper is a must for switches: many of ours broke or hit the walls
asymmetrically, and all were replaced numerous times in the final month. A bumper would

On the topic of robustness, we learned that getting the vision algorithm accurate is very
important. Even if color alone seems sufficient to detect an object, and even if color works
a majority of the time, invariance to light, brightness, and background noise is valuable. Much of
the time spent tuning could have been saved had a more involved algorithm been used.

We also learned that integration is not as hard as it seems if planning is done
beforehand. Initially it was expected that integration of the subsystems (specifically
communication) would take a week or more. With proper planning, specifications for the various
subsystems' code, and the use of the Robot Operating System, integration took only a day. Yet
tuning took longer than expected, which was a valuable lesson in time management. If a buffer
week had not been scheduled during our project management stage, much of the tuning would
not have been completed on time.
9.2. Conclusions: Future Work

A number of suggestions for future work can be derived from the lessons
learned over the course of the semester. One recommended vision-based improvement would be
to use stereo vision to detect depth and a support vector machine with a HOG (histogram of
oriented gradients) detector to find valves; in other words, use machine learning to be more
robust to light and changes to the test bed. Additionally, it would be smarter to place the
camera at a higher location. Due to an incomplete understanding of the testbed, Popeye's
camera sits slightly lower than optimal, which makes certain valve orientations hard to sense.
Raising the camera by 6 inches would fix this.

On the overall system scale, it is also recommended to:

- Add an emergency stop button for the whole power system
- Design better cabling to make electrical components reachable
- Design a better upper plate to make assembly/disassembly faster
- Add a nicer cover to make the robot appear more like an end product
- Add better voltage regulators for the motors to provide more stable movement

Lastly, if the robot were to be designed again entirely, it could be beneficial to design a
latching system that would allow the robot to attach itself to the rail. While having perpendicular
axes of movement and actuation resulted in a fast robot, the cost was slight variance in
locomotion: a turn of 91 degrees instead of 90 would cause drift and inaccuracy. A latching
system that keeps a fixed distance from the rail would fix this issue without compromising speed.
10. References
[1] "Perspective - ACTUV." Leidos. N.p., 22 Sept. 2016. Web. 01 Feb. 2017.

[2] Siciliano, Bruno, and Oussama Khatib. Springer Handbook of Robotics. Berlin: Springer,
2008. Print.

[3] "Differential Drive." Chess-EECS, Berkeley. Web.
https://chess.eecs.berkeley.edu/eecs149/documentation/differentialDrive.pdf

[4] "Snap-Action Switch with 50mm Lever: 3-Pin, SPDT, 5A." Pololu. N.p., n.d. Web. 01 Feb.
2017.

[5] "Infrared Proximity Sensor - Sharp GP2Y0A21YK." SEN-00242 - SparkFun Electronics.
N.p., n.d. Web. 01 Feb. 2017.

[6] "Ultrasonic Range Finder - LV-MaxSonar-EZ1." SEN-00639 - SparkFun Electronics. N.p.,
n.d. Web. 01 Feb. 2017.

[7] "Raspberry Pi Camera Board V2 - 8 Megapixels." Adafruit Industries Blog RSS. N.p., n.d.
Web. 01 Feb. 2017.

[8] "Arduino Mega 2560 R3 (Atmega2560 - Assembled)." Adafruit Industries Blog RSS. N.p.,
n.d. Web. 01 Feb. 2017.

[9] "Raspberry Pi 3 - Model B - ARMv8 with 1G RAM." Adafruit Industries Blog RSS. N.p.,
n.d. Web. 01 Feb. 2017.
11. Appendix

Appendix A: System Description: Arm logic Flowchart
