
sensors

Review
Sensor-Based Assistive Devices for Visually-Impaired
People: Current Status, Challenges, and Future Directions
Wafa Elmannai and Khaled Elleithy *
Department of Computer Science and Engineering, University of Bridgeport, Bridgeport, CT 06604, USA;
welmanna@my.bridgeport.edu
* Correspondence: elleithy@bridgeport.edu; Tel.: +1-203-576-4703

Academic Editor: Panicos Kyriacou


Received: 23 January 2017; Accepted: 1 March 2017; Published: 10 March 2017

Abstract: The World Health Organization (WHO) reported that there are 285 million
visually-impaired people worldwide. Among these individuals, there are 39 million who are totally
blind. There have been several systems designed to support visually-impaired people and to improve
the quality of their lives. Unfortunately, most of these systems are limited in their capabilities.
In this paper, we present a comparative survey of the wearable and portable assistive devices for
visually-impaired people in order to show the progress in assistive technology for this group of
people. Thus, the contribution of this literature survey is to discuss in detail the most significant
devices that are presented in the literature to assist this population and highlight the improvements,
advantages, disadvantages, and accuracy. Our aim is to address and present most of the issues
of these systems to pave the way for other researchers to design devices that ensure safety and
independent mobility to visually-impaired people.

Keywords: assistive devices; visually-impaired people; obstacles detection; navigation and orientation systems; obstacles avoidance

1. Introduction
The World Health Organization (WHO) reported that there are 285 million visually-impaired
people worldwide. Among these individuals, there are 39 million who are blind [1].
More than 1.3 million are completely blind and approximately 8.7 million are visually-impaired in the
USA [2]. Of these, 100,000 are students, according to the American Foundation for the Blind [2] and
the National Federation of the Blind [3]. Over the past years, blindness caused by diseases has
decreased due to the success of public health actions. However, the number of blind people who are
over 60 years old is increasing by 2 million per decade. Unfortunately, all these numbers are estimated
to double by 2020 [4].
The need for assistive devices for navigation and orientation has increased. The simplest, most
affordable, and most available navigation tools are trained dogs and the white cane [5]. Although
these tools are very popular, they cannot provide the blind with all the information and features for safe
mobility that are available to people with sight [6,7].

1.1. Assistive Technology


All the systems, services, devices and appliances that are used by disabled people to help in their
daily lives, make their activities easier, and provide safe mobility are included under one umbrella
term: assistive technology [8].
In the 1960s, assistive technology was introduced to solve daily problems related to
information transmission (such as personal care) [9] and to provide navigation and orientation aids
related to mobility assistance [10–12].


In Figure 1, visual assistive technology is divided into three categories: vision enhancement,
vision substitution, and vision replacement [12,13]. This assistive technology became available to
blind people through electronic devices that provide the user with detection and localization of
objects, offering a sense of the external environment through sensor functions.
The sensors also aid the user with the mobility task based on the determination of the dimensions, range
and height of objects [6,14].
The vision replacement category is more complex than the other two categories; it deals with
medical and technological issues. Vision replacement includes displaying information directly to the
visual cortex of the brain or through the ocular nerve [12]. Vision enhancement and vision
substitution are similar in concept; the difference is that in vision enhancement, the camera input
is processed and then the results are displayed visually. Vision substitution is similar to vision
enhancement, yet the result constitutes a non-visual display, which can be vibration, auditory feedback or both,
based on the hearing and touch senses that can be easily controlled and felt by the blind user.
The main focus in this literature survey is the vision substitution category, including its three
subcategories: Electronic Travel Aids (ETAs), Electronic Orientation Aids (EOAs), and Position Locator
Devices (PLDs). Our in-depth study of all the devices that provide the aforementioned services
allows us to come up with a fair taxonomy that can classify any proposed technique among others.
The classification of electronic devices for visually-impaired people is shown in Figure 1. Each one of
the three categories tries to enhance blind people's mobility with slight differences.

1.1.1. Electronic Travel Aids (ETAs)


These are devices that gather information about the surrounding environment and transfer it to
the user through sensor cameras, sonar, or laser scanners [15,16]. The rules of ETAs according to the
National Research Council [6] are:

(1) Determining obstacles around the user body from the ground to the head;
(2) Affording some instructions to the user about whether the movement surface contains gaps or textures;
(3) Finding items surrounding the obstacles;
(4) Providing information about the distance between the user and the obstacle with essential
direction instructions;
(5) Proposing notable sight locations in addition to identification instructions;
(6) Affording information to give the ability of self-orientation and mental map of the surroundings.

1.1.2. Electronic Orientation Aids (EOAs)


These are devices that provide pedestrians with directions in unfamiliar places [17,18].
The guidelines of EOAs are given in [18]:

(1) Defining the route to select the best path;


(2) Tracing the path to approximately calculate the location of the user;
(3) Providing mobility instructions and path signs to guide the user and develop her/his mental map of
the environment.

1.1.3. Position Locator Devices (PLD)


These are devices that determine the precise position of their holder, such as devices that use
GPS technology.
Our focus in this paper is on the most significant and latest systems that provide critical services
for visually-impaired people including obstacle detection, obstacle avoidance and orientation services
containing GPS features.

Figure 1. Classification of electronic devices for visually-impaired people.



In Section 2, a brief description is provided for the most significant electronic devices. An analysis of
the main features of each studied device is presented in Section 3. Section 4 concludes this review
with a discussion of the systems' evaluation. The final section presents future directions.

2. The Most Significant Electronic Devices for Visually-impaired People


Most electronic aids that provide services for visually-impaired people depend on the data
collected from the surrounding environment (via laser scanners, camera sensors, or sonar) and
transmitted to the user in either tactile or audio format, or both. Different opinions on which
feedback type is better have been discussed, and this is still an open topic.
However, regardless of the services that are provided by any particular system, there are some
basic features required in that system to offer a fair performance. These features can be the key
to measuring the efficiency and reliability of any electronic device that provides navigation and
orientation services for visually-impaired people. Therefore, we present in this section a list of the
most important and latest systems with a brief summary including: what is the system, its prototype,
briefly how it works, the well-known techniques that are used in that system, and the advantages
and disadvantages. Those devices are classified in Figure 1 based on the described features in Table 1.
The comparative results based on these features will be presented in the following section, with an
answer to the question of which device is the most efficient and desirable.

Table 1. The most important features that correspond to the users' needs.

Analysis Type: The system needs to provide fast processing of the information exchanged between the user and the sensors. For example, a system that detects an obstacle 2 m in front of the user in 10 s cannot be considered a real-time system [12].
Coverage: The system needs to provide its services indoors and outdoors to improve the quality of visually-impaired people's lives.
Time: The system should perform as well in the daytime as at night.
Range: The distance between the user and the object to be detected by the system. The ideal minimum range is 0.5 m, whereas the maximum range should be more than 5 m; a further distance is better.
Object Type: The system should avoid suddenly appearing objects, which means the system should detect dynamic objects as well as static objects.

Smart Cane
Wahab et al. studied the development of the Smart Cane product for detecting objects and
producing accurate instructions for navigation [19]. The Smart Cane was presented originally by Central
Michigan University's students. The design of the Smart Cane is shown in Figure 2. It is a portable
device that is equipped with a sensor system. The system consists of ultrasonic sensors, microcontroller,
vibrator, buzzer, and water detector in order to guide visually-impaired people. It uses servo motors,
ultrasonic sensors, and fuzzy controller to detect the obstacles in front of the user and then provide
instructions through voice messages or hand vibration.
The servo motors are used to give a precise position feedback. Ultrasonic sensors are used for
detecting the obstacles. Hence, the fuzzy controller is able to give the accurate decisions based on the
information received from the servo motors and ultrasonic sensors to navigate the user.
The output of the Smart Cane depends on gathering the above information to produce audio
messages through the speaker to the user. In addition, hearing impaired people have special vibrator
gloves that are provided with the Smart Cane. There is a specific vibration for each finger, and each
one has a specific meaning.
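The paper does not give the controller's rule base itself; the following minimal Python sketch only illustrates, under assumed thresholds and rule outputs, the kind of sensor-to-feedback mapping the Smart Cane is described as performing (only the 0.5 cm water threshold comes from the text; everything else is hypothetical).

```python
# Hedged sketch: a simplified, crisp approximation of the decision logic the Smart Cane's
# fuzzy controller is described as performing. Thresholds and feedback wording are
# illustrative assumptions, not the authors' design.

def smart_cane_feedback(distance_m: float, water_depth_cm: float) -> list[str]:
    """Map sensor readings to user feedback (voice message, vibration, buzzer)."""
    actions = []
    if water_depth_cm >= 0.5:                # water sensor triggers only at >= 0.5 cm (per the paper)
        actions.append("buzzer: water ahead")
    if distance_m < 0.5:
        actions.append("vibration: strong")  # obstacle very close
        actions.append("voice: obstacle ahead, stop")
    elif distance_m < 2.0:
        actions.append("vibration: mild")
        actions.append("voice: obstacle approaching")
    return actions

print(smart_cane_feedback(distance_m=0.4, water_depth_cm=0.0))
```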
Figure 2. The Smart Cane prototype [19].
The Smart Cane has achieved its goals in detecting the objects and obstacles, producing the
needed feedback. As shown in Figure 2, the Smart Cane is easily carried and easily bent. In addition,
the water sensor will not detect the water unless it is 0.5 cm or deeper, and the buzzer of the water detector
will not stop before it is dried or wiped. The authors of the paper have some recommendations for the
tested system. They stated that, in order to monitor the power status, it would be better to have a
power supply meter installed. The authors also recommended adding a buzzer timer to specify the alert
period, to solve the buzzer issue as well.

Eye Substitution
Bharambe et al. developed an embedded device to act as an eye substitution for vision-impaired
people (VIP) that helps in directions and navigation as shown in Figure 3 [20]. Mainly, the
embedded device is a TI MSP 430G2553 micro-controller (Texas Instruments Incorporated, Dallas,
TX, USA). The authors implemented the proposed algorithms using an Android application. The role
of this application is to use GPS, improved GSM, and GPRS to get the location of the person and
generate better directions. The embedded device consists of two HC-SR04 ultrasonic sensors (Yuyao
Zhaohua Electric Appliance Factory, Yuyao, China), and three vibrator motors.

Figure 3. The prototype of the eye substitution device [20].



The ultrasonic sensors send a sequence of ultrasonic pulses. If an obstacle is detected, the
sound will be reflected back to the receiver as shown in Figure 4. The micro-controller processes the
readings of the ultrasonic sensors in order to activate the motors by sending pulse width modulation.
It also has a low power consumption [21].

Figure 4. Reflection of sequence of ultrasonic pulses between the sender and receiver.
The design of the device is light and very convenient. Furthermore, the system uses two sensors
to overcome the issue of the narrow cone angle as shown in Figure 5. So, instead of covering two ranges,
the ultrasonic devices cover three ranges. This not only helps in detecting obstacles, but also in
locating them. However, the design could be better if the authors did not use the wooden foundation
that will be carried by the user most of the time. In addition, the system is not reliable and is limited to
Android devices.
Figure 5. Ranges that are covered by ultra-sonic sensors [20].
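For illustration, the sketch below shows the standard pulse-echo distance computation used by HC-SR04-class sensors and one simple way the two overlapping sensor cones of Figure 5 could be reduced to three regions; the region labels and numeric values are assumptions, not taken from [20].

```python
# Hedged sketch: converting an ultrasonic echo time to a distance, and using the two
# overlapping sensor cones (R1, R2, R3 in Figure 5) to roughly locate an obstacle.

SPEED_OF_SOUND_M_S = 343.0  # at roughly 20 degrees C

def echo_time_to_distance_m(echo_time_s: float) -> float:
    # The pulse travels to the obstacle and back, so halve the round-trip time.
    return (echo_time_s * SPEED_OF_SOUND_M_S) / 2.0

def region_from_sensors(left_detects: bool, right_detects: bool) -> str:
    # Two overlapping cones yield three distinguishable regions.
    if left_detects and right_detects:
        return "R2 (center overlap)"
    if left_detects:
        return "R1 (left)"
    if right_detects:
        return "R3 (right)"
    return "clear"

print(echo_time_to_distance_m(0.006))   # ~1.03 m
print(region_from_sensors(True, True))  # obstacle in the overlapping region
```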

Fusion of Artificial Vision and GPS (FAV&GPS)


An assistive device for blind people was introduced in [22] to improve mapping of the user's
location and positioning of the surrounding objects using two functions: one based on a map
matching approach and one on artificial vision [23]. The first function helps in locating the required object
as well as allowing the user to give instructions by moving her/his head toward the target. The second
one helps in automatic detection of visual aims. As shown in Figure 6, this device is a wearable device
that is mounted on the user's head, and it consists of two Bumblebee stereo cameras for video input
installed on the helmet, a GPS receiver, headphones, a microphone, and an Xsens Mti tracking device
for motion sensing. The system processes the video stream using the SpikNet recognition algorithm [24]
to locate the visual features that handle the 320 × 240 pixel image.
For fast localization and detection of such visual targets, this system integrated the Global Position
System (GPS), a modified Geographical Information System (GIS) [25] and vision-based positioning.
This design is able to improve the performance of navigation systems where the signal is degraded.
Therefore, this system can be combined with any navigation system to overcome the issues of
navigation in such areas.

Figure 6. An assistive device for blind people based on a map matching approach and artificial vision [22].

Due to the lack of availability of some information about the consistency of pedestrian mobility
in commercial GIS, this system maps the GPS signal with the adapted GIS to estimate the user's
current position as shown in Figure 7. The 3D target's position is calculated using matrices of lenses
and stereoscopic variance. After detecting the user and target positions, the vision agent sends the ID
of the target and its 3D coordinates.

Figure 7. The result of mapping both commercial Geographical Information System (GIS) and Global
Position System (GPS) signals is P1. P2 is the result of mapping the GPS signals with the adapted GIS [22].

The matrix of the rotation of each angle is multiplied with the target coordinates in the head
reference frame (x, y, z) to obtain the target's coordinates in the map reference frame (x', y', z') as given
in Equation (1). After that, the design uses a Geographic Information System (GIS) that contains all targets'
geolocated positions to get the longitude and latitude of landmarks. Based on this information, the
authors could compute the user's coordinates in World Geodetic System Coordinates (WGS84).
The results are in audio format through the speaker that is equipped with the device.

\[
\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix} =
\begin{pmatrix} \cos(\mathrm{yaw}) & -\sin(\mathrm{yaw}) & 0 \\ \sin(\mathrm{yaw}) & \cos(\mathrm{yaw}) & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} \cos(\mathrm{pitch}) & 0 & \sin(\mathrm{pitch}) \\ 0 & 1 & 0 \\ -\sin(\mathrm{pitch}) & 0 & \cos(\mathrm{pitch}) \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\mathrm{roll}) & -\sin(\mathrm{roll}) \\ 0 & \sin(\mathrm{roll}) & \cos(\mathrm{roll}) \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
\qquad (1)
\]
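The following NumPy sketch applies Equation (1) under a common axis convention (yaw about z, pitch about y, roll about x); the exact convention used in [22] may differ.

```python
# Hedged sketch of Equation (1): composing yaw/pitch/roll rotations to express a target's
# head-frame coordinates in the map frame. Axis conventions here are one common choice
# and are not guaranteed to match the authors' exact setup.

import numpy as np

def head_to_map(xyz, yaw, pitch, roll):
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    R_yaw   = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R_pitch = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    R_roll  = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return R_yaw @ R_pitch @ R_roll @ np.asarray(xyz)

print(head_to_map([1.0, 0.0, 0.0], yaw=np.pi / 2, pitch=0.0, roll=0.0))  # -> approx [0, 1, 0]
```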
The use of the modified GIS shows positive results and a better estimation of the user's position
compared to the commercial GIS as shown in Figure 7. However, the system has not been tested with
navigation systems to ensure its performance if it is integrated with a navigation system. So, whether it
will enhance navigation systems or not is unknown.
Banknote Recognition (BanknoteRec)
An assistive device for blind people was implemented in [26] to help them classify the type of
banknotes and coins. The system was built based on three models: input (OV6620 Omni vision CMOS
camera), process (SX28 microcontroller), and output (speaker).
The RGB color model is used to specify the type of the banknote by calculating the average red, green,
and blue color. The function of the microcontroller (IV-CAM) with the camera mounted on a chip
is to extract the desired data from the camera's streaming video. Then, the mean color and
the variance data will be gathered for the next step, when the MCS-51 microcontroller starts to process this
gathered information. Based on the processing results, an IC voice recorder (Aplus ap8917) records the
voice of each kind of banknote and coin.
This system compares some samplings of each kind of a banknote using RGB model. The best
matching banknote will be the result of the system. However, the coin is identified based on the size
by computing the number of pixels. To find the type of the coin, the average of pixel number of each
coin needs to be calculated. The best matching resultant coin will be the result of the device through
the speaker.
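As a rough illustration of the mean-RGB matching step described above, the sketch below classifies a banknote by its nearest mean color; the reference colors and class names are made-up placeholders, not values from [26].

```python
# Hedged sketch: compute the average color of sampled banknote pixels and return the
# closest reference note. Reference colors are hypothetical, not measured values.

REFERENCE_NOTES = {                 # hypothetical mean (R, G, B) per banknote class
    "20 THB": (95.0, 140.0, 110.0),
    "100 THB": (150.0, 90.0, 85.0),
    "500 THB": (140.0, 110.0, 160.0),
}

def mean_rgb(pixels):
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (r, g, b)

def classify_banknote(pixels):
    r, g, b = mean_rgb(pixels)
    def dist(ref):
        return (r - ref[0]) ** 2 + (g - ref[1]) ** 2 + (b - ref[2]) ** 2
    return min(REFERENCE_NOTES, key=lambda name: dist(REFERENCE_NOTES[name]))

print(classify_banknote([(150, 92, 80), (148, 88, 90)]))  # -> "100 THB"
```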
The accuracy of the results is 80% due to two factors: the color difference between new and old
currency, and lighting that differs from natural light, both of which might affect the results. On the other hand, the
device was only tested on Thai currency. Therefore, the system is not reliable, and we cannot guarantee
the efficiency of the system's performance on other types of currency. Also, the device may not identify
banknotes other than those tested unless each kind of banknote has a unique color, nor coins that do
not have a size similar to the tested ones.
Recently, similar work was presented in [27]. This portable device shows reasonable
accuracy in detecting Euro banknotes and good accuracy in recognizing them by integrating well-known
computer vision techniques. However, the system has a very limited scope for a particular
application; for example, coins were not considered for detection and recognition. Furthermore, fake
banknotes are not detected by the system.
TED
A design of a tiny dipole antenna was developed in [28] to be connected within a Tongue-placed
electro-tactile device (TED) to assist blind people in transmitting information and navigating. This
antenna is designed to establish a wireless communication between the TED device and the matrix of
electrodes. The design of the antenna at the front and the back is shown in Figure 8a–d. A Bazooka Balun is
used to reduce the effect of the cable on a small antenna [29].
The idea of the TED system that was later designed in [30] is a development of the Paul Bach-y-Rita
system into a tiny wireless system. The visual information of all video inputs is displayed on a
tactile display unit.
The design of this system as shown in Figure 9 is based on three main parts: sunglasses with an
object-detecting camera, the tongue electro-tactile device (TED), and a host computer. The device
contains an antenna to support wireless communication in the system, a matrix of electrodes to help
the blind sense through the tongue, a central processing block (CPU), a wireless transmission block,
an electrode controlling block, and a battery. A matrix of 3 × 3 electrodes that is distributed into 8 pulses
will be placed on the blind person's tongue as shown in Figure 10, and the remaining components
will be fabricated into a circuit. Each pulse corresponds to a specific direction.

Figure 8. (a) The design of the antenna at the front and (b) at the back; (c) fabricated antenna at the
front; (d) at the back [30].

Figure 9. Tongue-placed electro-tactile system with sunglasses carrying an object detection camera [28]:
(a) sunglasses with detective camera of objects; (b) tongue electro-tactile device.

Figure 10. (a) Matrix of electrodes; (b) eight different directions for the matrix of electrodes [30].

The image signals that are sent from the camera to the electrodes matrix will be received by the
host computer first, and then will be transformed into interpretable information. Hence, this converted
information will be received by the wireless transmission block of the TED device as shown in
Figure 11. Next, the image signal will be processed into an encoded signal by the central processing block,
which will be processed into a control signal by the electrode controlling block afterwards. In the end,
the control signal will be sent to the electrodes.
Figure 11. The overall design of the system [30].
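As a rough illustration of the final stage of this pipeline, the sketch below quantizes an obstacle bearing into one of eight directional pulses of the electrode matrix; the direction numbering and the bearing-based rule are assumptions for illustration, not the encoding used in [30].

```python
# Hedged sketch: reduce processed image information (here, an obstacle bearing) to one of
# the eight directional pulses on the electrode matrix. Numbering is an assumption.

DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]  # 8 pulses around the matrix

def encode_direction(obstacle_bearing_deg: float) -> int:
    """Quantize a bearing (0-360, clockwise from straight ahead) into one of 8 pulses."""
    return int(((obstacle_bearing_deg % 360) + 22.5) // 45) % 8

pulse = encode_direction(95.0)
print(pulse, DIRECTIONS[pulse])   # -> 2 E  (obstacle roughly to the right)
```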



Although this device meets its goal and shows an effective performance, the results show that the
antenna is not completely omni-directional. This indicates that the system is not optimized and requires
further tests. In addition, the device was tested on a number of blind people. The results show that
the user does not respond to some of the pulses, for example, pulse number 7. This indicates that
the system is not sending the pulse to that particular point.

CASBlip
A wearable aid system for blind people (CASBlip) was proposed in [31]. The aims of this design
are to provide object detection, orientation, and navigation aid for both partially and completely blind
people. This system has two important modules: a sensor module and an acoustic module. The sensor
module contains a pair of glasses that includes the 1 × 64 3D CMOS image sensors and laser light
beams for object detection as shown in Figure 12. In addition, it has a function implemented using a
Field Programmable Gate Array (FPGA) that controls the reflection of the laser light beams after their
collision with an enclosing object back to the lenses of the camera, calculates the distance, acquires the
data, and controls the application software. The other function of the FPGA was implemented within
the acoustic module in order to process the environmental information for locating the object and
convert this information to sounds that will be received by stereophonic headphones.

Figure 12. Design of the sensor module [31].

The developed acoustic system in [31] allows the user to choose the route and path after
detecting the presence of the object and user. However, the small range of this device can cause a
serious incident. The system was tested on two different groups of blind people. However, the results
of outdoor experiments were not as good as the indoor experiments. This was because of the external
noise. One of the recommendations to further develop this system is to use stereovision or add more
sensors for improving the image acquisition.
RFIWS
A Radio Frequency Identification Walking Stick (RFIWS) was designed in [32] in order to
help blind people navigate on the sidewalk. This system helps in detecting and calculating
the approximate distance between the sidewalk border and the blind person. Radio Frequency
Identification (RFID) is used to transfer and receive information through a radio wave medium [33].
The RFID tag, reader, and middleware are the main components of RFID technology.
A number of RFID tags are placed in the middle of the sidewalk with consideration of an equal
and specific distance between each other and RFID reader. The RFID will be connected to the stick
in order to detect and process received signals. Sounds and vibrations will be produced to notify the
user of the distance between the border of the sidewalk and himself/herself. Louder sounds will be
generated as the user gets closer to the border. Figure 13 shows the distance of frequency detection (Y)
and the width of the sidewalk (X). Each tag needs to be tested separately due to different ranges of detection.
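A minimal sketch of the proximity-to-loudness behaviour described above; the 0-100 volume scale and the 2 m full-quiet distance are illustrative assumptions, not parameters from [32].

```python
# Hedged sketch: warning loudness grows as the user approaches the sidewalk border.

def warning_volume(distance_to_border_m: float, max_distance_m: float = 2.0) -> int:
    d = max(0.0, min(distance_to_border_m, max_distance_m))
    return round(100 * (1.0 - d / max_distance_m))   # 100 at the border, 0 at >= 2 m

for d in (2.0, 1.0, 0.2):
    print(d, "m ->", warning_volume(d))
```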

Figure 13. Distance of the frequency detection on sidewalk [32].



RFID technology has a perfect reading function between the tags and readers, which makes the
device reliable in the level of detection. However, each tag needs a specific range, which requires a lot of
individual testing and leads to scope limitation. Also, the system can easily be stopped from
working in the case of wrapping or covering the tags, which prevents those tags from receiving the radio waves.

A Low Cost Outdoor Assistive Navigation System (LowCost Nav)

A navigator with a 3D sound system was developed in [34] to help blind people in navigating.
The device is packed on the user's waist with a Raspberry Pi, a GPS receiver and three main buttons to
run the system as shown in Figure 14.
The user can select a comfortable sound from a recorded list to receive the navigation steps in an
audible format. So, the device is provided with voice prompts and speech recognition for better
capabilities. The system calculates the distance between the user and the object by using a gyroscope
and magnetic compass. Furthermore, the Raspberry Pi controls the process of the navigation. Both
Mo Nav and Geo-Coder-US modules were used for pedestrian route generation. So, the system works
as follows: the user can just use the microphone to state the desired address or use one of the three
provided buttons if the address is already stored in the system. The user can press the up button to choose a
stored address, e.g., home, or enter a new address by pressing the down button and start recording
the new address. The middle button will be selected to continue after the device ensures that the
selected address is the correct address.
The system is composed of five main modules: the loader, which is the controller of the system; the initializer,
which verifies the existence of the required data and libraries; the user interface, which receives the desired
address from the user; the address query, which translates the entered address to geographic coordinates;
the route query, which obtains the user's current location from GPS; and the route traversal, which gives the
audible instructions to the user to get to his destination.

Figure 14. The prototype of the proposed device [34].

This device shows a good performance within a residential area as shown in Figure 15a. It is also
economically cheap for low-income people. In addition, the device is light and easy to carry.
However, the device shows a low performance in a civilian area where tall buildings exist, due
to the low accuracy of the used GPS receiver as shown in Figure 15b.

Figure 15. (a) The results of the device's orientation in a residential area; (b) The results of the device's
orientation in a civilian area [34].

ELC

The proposed electronic long cane (ELC) is based on haptics technology and was presented
by A.R. Garcia et al. for the mobility aid of blind people [35]. The ELC is a development of the
traditional cane in order to provide an accurate detection of the objects that are around the user. A
small grip of the cane shown in Figure 16 consists of an embedded electronic circuit that includes an
ultrasonic sensor for the detection process, a micro-motor actuator as the feedback interface, and a 9 V
battery as a power supplier. This grip is able to detect the obstacles above the waistline of the blind
person. A tactile feedback through a vibration will be produced as a warning of a close obstacle. The
frequency of the feedback will be increased as the blind person gets closer to the obstacle. Figure 17
shows how the ELC could help the blind person in detecting the obstacle above his waistline, which
is considered one of the causes of serious injury for those who are visually-impaired or
completely blind.

Figure 16. The prototype of grip [35].

Figure 17. The proposed device for enhanced spatial sensitivity [35].
The ELC was tested on eight blind volunteers. Physical obstacles, information obstacles, and
cultural obstacles are the main tested categories for the obstacle classification. The results were
classified based on a quiz taken by the blind people who used the device. The results showed
the efficiency of the device for physical obstacle detection above the waistline of the blind person.
However, the device helps a blind person only in detecting obstacles, not in the orientation function.
So, the blind person still needs to identify his path himself and rely on the traditional cane for
navigation as shown in Figure 17.
Cognitive Guidance System (CG System)
Landa et al. proposed a guidance system for blind people through structured environments [36].
This design uses a Kinect sensor and stereoscopic vision to calculate the distance between the user and
the obstacle with the help of Mamdani-type fuzzy decision rules, and a vanishing point to guide the user
through the path.
The proposed system consists of two video cameras (Sony 1/3 progressive scan CCD) and one
laptop. The detection range extends beyond 4 m; this was obtained using stereoscopy and the
Kinect to compress the cloud of 3D points in the range of 40 cm to 4 m in order to calculate the
vanishing point. The vanishing point is used in this system as a virtual compass to direct the blind
person through a structured environment. Then, fuzzy decision rules are applied to avoid the obstacles.
In a first step, the system scans for planes in the range between 1.5 m and 4 m. For better performance,
the system processes 25 frames per second. Then the Canny filter is used for edge detection. After the
edges are defined, the result is used for calculating the vanishing point. Next, the device gets the 3D
Euclidean orientation from the Kinect sensor which is projected to 2D image. That gives the direction
to the goal point.
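The following OpenCV sketch illustrates the edge-detection-to-vanishing-point idea under simple assumptions (Canny followed by probabilistic Hough lines and the intersection of the two longest segments); it is not the authors' implementation and omits the Kinect-based 3D step.

```python
# Hedged sketch: Canny edges -> dominant line segments -> rough vanishing-point estimate.
# Parameters are illustrative; real scenes need more robust line selection and voting.

import cv2
import numpy as np

def estimate_vanishing_point(gray_image: np.ndarray):
    edges = cv2.Canny(gray_image, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None or len(lines) < 2:
        return None
    # Intersect the two longest detected segments as a crude vanishing-point estimate.
    segs = sorted(lines[:, 0, :], key=lambda s: -np.hypot(s[2] - s[0], s[3] - s[1]))[:2]
    (x1, y1, x2, y2), (x3, y3, x4, y4) = segs
    a1, b1, c1 = y2 - y1, x1 - x2, (y2 - y1) * x1 + (x1 - x2) * y1
    a2, b2, c2 = y4 - y3, x3 - x4, (y4 - y3) * x3 + (x3 - x4) * y3
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None   # parallel lines, no finite intersection
    return ((b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det)
```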
This work implemented 49 fuzzy rules which cover only 80 configurations. Moreover, the
vanishing point can be computed only based on existing lines, which rarely exist outdoors. That
emphasizes that the system is not suitable for outdoor use. The perception capabilities of the system need
to be increased to detect spatial landmarks as well.
Ultrasonic Cane as a Navigation Aid (UltraCane)
As a development of the C-5 laser cane [37], Krishna Kumar et al. deployed an ultrasonic-based cane to
aid blind people [38]. The aim of this work is to replace the laser with ultrasonic sensors to
avoid the risk of the laser. This cane is able to detect ground and aerial obstacles.
The prototype of this device as shown in Figure 18a is based on a lightweight cane, three ultrasonic
trans-receivers, an X-bee-S1 trans-receiver module, two Arduino UNO microcontrollers, three LED panels,
and a piezo buzzer. The target of the three ultrasonic sensors is to detect ground and aerial obstacles
in the range of 5 cm to 150 cm. Figure 18b shows the process of object detection within a specific
distance. Once an ultrasonic wave is detected, a control signal is generated and it triggers the echo
pin of the microcontroller. The microcontroller records the duration for which each echo pin is high
and transforms it into a distance. The control signal will be wirelessly transferred by the X-bee to
the receiving unit, which would be worn on the shoulders. The buzzer will be played to alert the user
based on the obstacle's approach (high alert, normal alert, low alert and no alert).
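A minimal sketch of the graded-alert behaviour described above; the cut-off distances are illustrative assumptions, with only the 5-150 cm detection window taken from the text.

```python
# Hedged sketch: map a measured obstacle distance to one of the cane's alert levels.

def alert_level(distance_cm: float) -> str:
    if not 5 <= distance_cm <= 150:      # outside the stated detection window
        return "no alert"
    if distance_cm < 50:
        return "high alert"
    if distance_cm < 100:
        return "normal alert"
    return "low alert"

for d in (30, 75, 140, 200):
    print(d, "cm ->", alert_level(d))
```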

Figure 18. (a) The prototype of the device; (b) Detection process of the obstacle from 5 cm to 150 cm [38].

Obstacle Avoidance Using Auto-adaptive Thresholding (Obs Avoid using Thresholding)


The authors claimed that this device can be a navigational aid to the blind people. However,
An obstacle
the results showed avoidance
it is only system
an objectusing Kinect
detector depth
within camera
a small for blind
range. Also, people
detectionwas
of presented
the dynamic by
Muhamad and Widyawan [39]. The prototype of the proposed system is shown in
object was not covered in this technique which may led to an accident. In order to improve this work,Figure 19a. The
auto-adaptive thresholding
tele-instructions is used
should be giving to detect
to the and
user for calculate aid
navigation the as
distance
well asbetween the obstacle
the integration of GPSand the
which
user.
is The to
needed notebook with
allocate the USBposition.
users hub, earphone, and Microsoft Kinect depth camera are the main
components of the system.
Obstacle Avoidance Using Auto-adaptive Thresholding (Obs Avoid using Thresholding)
An obstacle avoidance system using a Kinect depth camera for blind people was presented by Muhamad and Widyawan [39]. The prototype of the proposed system is shown in Figure 19a. Auto-adaptive thresholding is used to detect obstacles and to calculate the distance between the obstacle and the user. A notebook with a USB hub, an earphone, and a Microsoft Kinect depth camera are the main components of the system.
The raw data (depth information for each pixel) are transferred to the system by the Kinect. To increase efficiency, depth values closer than 800 mm or farther than 4000 mm are reset to zero. The depth image is then divided into three areas (left, middle, and right), and the auto-adaptive threshold generates the optimal threshold value for each area. From each 2 × 2 pixel area, only one pixel is used. This group of data is classified and transformed into a depth histogram, and a contrast function calculates a local maximum for each depth, as shown in Figure 19b. The Otsu method is applied to find the highest-peak threshold value [40], and an average function then determines the closest object in each area. Beeps are generated through the earphone when an obstacle is within a range of 1500 mm. Once the obstacle is within 1000 mm, a voice recommendation is produced so that the blind person takes the left, middle, or right path. The low accuracy of the Kinect at close range could reduce the performance of the system. Also, the results show that the auto-adaptive threshold cannot differentiate between objects as the distance between the user and the obstacle increases.

Figure 19. (a) The prototype of the proposed system; (b) calculating threshold value and the distance of the closest object [39].
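For illustration only, the following Python sketch (ours, not the authors' code; the bin count, the use of NumPy, and the simplified Otsu routine are assumptions made here for clarity) reproduces the chain described above: clip the Kinect depth range, keep one pixel per 2 × 2 block, split the image into three areas, threshold each area adaptively, and map the nearest distance to the beep/voice alerts.

import numpy as np

AREAS = ("left", "middle", "right")

def closest_obstacles(depth_mm):
    """Sketch of the processing chain in [39]: clip depth, subsample, split into
    three areas, apply a per-area (auto-adaptive) threshold, report nearest object."""
    d = depth_mm.astype(np.float64).copy()
    d[(d < 800) | (d > 4000)] = 0                  # out-of-range depths reset to zero
    d = d[::2, ::2]                                # one pixel kept per 2 x 2 area
    _, w = d.shape
    nearest = {}
    for name, cols in zip(AREAS, np.array_split(np.arange(w), 3)):
        valid = d[:, cols][d[:, cols] > 0]
        if valid.size == 0:
            nearest[name] = None
            continue
        hist, edges = np.histogram(valid, bins=64, range=(800, 4000))
        thr = otsu_threshold(hist, edges)          # per-area "auto-adaptive" threshold
        below = valid[valid <= thr]
        nearest[name] = float(below.mean() if below.size else valid.min())
    return nearest

def otsu_threshold(hist, edges):
    """Otsu's method on a 1-D depth histogram: maximise the between-class variance."""
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    if total == 0:
        return centers[-1]
    mean_total = (hist * centers).sum() / total
    best_thr, best_var, w0, cum = centers[-1], -1.0, 0.0, 0.0
    for i, c in enumerate(centers):
        w0 += hist[i]
        cum += hist[i] * c
        if w0 == 0 or w0 == total:
            continue
        m0 = cum / w0
        m1 = (mean_total * total - cum) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_thr = var, c
    return best_thr

def feedback(distance_mm):
    """Alert levels described in [39]: beep inside 1500 mm, voice advice inside 1000 mm."""
    if distance_mm is None:
        return "clear"
    if distance_mm <= 1000:
        return "voice recommendation: take the left, middle or right path"
    if distance_mm <= 1500:
        return "beep"
    return "clear"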

Obstacle Avoidance Using Haptics and a Laser Rangefinder (Obs Avoid using Haptics&Laser)
Using a laser as a virtual white cane to help blind people was introduced by Daniel et al. [41]. The environment is scanned by a laser rangefinder and the feedback is sent to the user via a haptic interface. The user is able to sense an obstacle several meters away with no physical contact. The length of the virtual cane can be chosen by the user, but it is still limited. An MSI laptop with an Intel Core i7-740QM, a SICK laser rangefinder, an NVIDIA GTX460M graphics card, and a Novint Falcon haptic display are the main components of the proposed system, which is mounted on an electronic wheelchair. The developed software used the open source platform H3DAPI [42].
The wheelchair is controlled by a joystick using the right hand, while sensing the environment is controlled by the Falcon (haptic interface) using the other hand, as shown in Figure 20. As the user starts the system, the rangefinder starts scanning the environment in front of the chair. Then, it calculates the distance between the user and the object using the laser beams. The distance information is transmitted to the laptop to create a three-dimensional graph using the NVIDIA card, which is then transmitted to the haptic device.

Figure 20. Display the proposed system mounted on the special electronic wheelchair [41].

The results showed that the precise location of obstacles and their angles was difficult to determine, due to the user's misunderstanding of the scale factor between the real world and the model world in the haptic grip translation.
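As a rough illustration of the virtual-cane idea (hypothetical code, not part of [41]; the scan format and the 3 m default cane length are assumptions), a laser scan can be probed in the direction selected through the haptic grip and clamped to the chosen cane length:

import numpy as np

def virtual_cane_probe(scan_angles_rad, scan_ranges_m, probe_angle_rad, cane_length_m=3.0):
    """Return (contact, distance): whether the virtual cane 'touches' an obstacle
    in the probed direction, and the distance clamped to the cane length."""
    angles = np.asarray(scan_angles_rad)
    ranges = np.asarray(scan_ranges_m)
    i = int(np.argmin(np.abs(angles - probe_angle_rad)))   # nearest scan ray
    return bool(ranges[i] <= cane_length_m), float(min(ranges[i], cane_length_m))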
A Computer Vision System that Ensures Autonomous Navigation (ComVis Sys)
A real-time obstacle detection system was presented in [43] to alert blind people and aid them in their mobility indoors and outdoors. This application works on a smartphone that is attached to the blind person's chest. Furthermore, this paper focuses on a static and dynamic object detection and classification technique which was introduced in [44].
Using the detection technique in [44], the team was able to detect both static and dynamic objects in a video stream. The interest points, which are the pixels located at the centers of the cells of an image grid, are selected first. Then, the multiscale Lucas-Kanade algorithm tracks these selected points. After that, the RANSAC algorithm is applied to these points recursively to detect the background motion. A number of clusters is then created to merge the outlines. The distance between the object and the video camera defines the state of the object as either normal or urgent. The adapted HOG (Histogram of Oriented Gradients) descriptor was used as the recognition algorithm, integrated with the BoVW (Bag of Visual Words) framework. The images are resized based on the object type that the team decided. Then, they computed the descriptor on the extracted interest points of each group of images and built clusters which contain the extracted features of all images. After that, they applied BoVW to create a codebook over all K clusters: W = {w1, w2, ..., wk}, where each w is a visual word that represents the system's vocabulary. The workflow is illustrated in Figure 21.
Figure 21. The process of detection and recognition algorithm [43].
Next, each image is divided into the blocks created by HOG, included in the training dataset, and mapped to the related visual word. At the end, an SVM classifier is used for training, so each labeled sample is passed to the classifier to be differentiated into specific categories.
The implementation of the system on a smartphone is considered a great mobility aid for blind people, since smartphones nowadays are light and easy to carry. Also, using the HOG descriptor to extract the features of each set of images makes the recognition process efficient, as the system not only detects the object but also recognizes its type using the clusters.
However, the fixed image sizes, which depend on the category, can make detecting the same object at a different scale a challenge. Objects in dark places and those that are highly dynamic cannot be detected. Smartphone videos are noisy as well. In addition, the tested dataset of 4500 images with a dictionary of 4000 words is considered small. The system was tested on, and can only work on, a Samsung S4.
on a Samsung S4.
By adapting GSM and GPS coordinator, Prudhvi et al. introduced an assistive navigator for blind
Silicon Eyes (Sili Eyes)
people in [45]. It helps the users detect their current location, hence, navigating them using haptic
feedback.
By In addition,
adapting GSM and theGPS
usercoordinator,
can get information
Prudhviabout
et al.time, date and an
introduced even the color
assistive of the objects
navigator for blind
in front of him/her in audio format. The proposed device is attached within
people in [45]. It helps the users detect their current location, hence, navigating them a silicon glove to be
using haptic
wearable as shown in Figure 22.
feedback. In addition, the user can get information about time, date and even the color of the objects
The prototype of the proposed device is based on a microcontroller which is 32-bit cortex-M3 to
in front of him/her in audio format. The proposed device is attached within a silicon glove to be
control entire system, a 24-bit color sensor to recognize the colors of the objects, light/temperature
wearable as and
sensor, shown
SONARin Figure 22.the distance between the object and the user.
to detect

Figure 22. The proposed system attached on silicon glove [45].


Figure 22. The proposed system attached on silicon glove [45].
The system supports a touch keyboard using Braille technique to enter any information. After
the user
The choosesofthe
prototype thedesired destination,
proposed device ishe/she
basedwill
on be directed using MEMS
a microcontroller whichaccelerometer and
is 32-bit cortex-M3 to
magnetometer
control through
entire system, the road.
a 24-bit colorThe instructions
sensor will bethe
to recognize sentcolors
through headset
of the that is
objects, connected to
light/temperature
sensor, and SONAR to detect the distance between the object and the user.
The system supports a touch keyboard using Braille technique to enter any information. After
the user chooses the desired destination, he/she will be directed using MEMS accelerometer and
magnetometer through the road. The instructions will be sent through headset that is connected to the
device via MP3 decoder. The user will be notified by SONAR on the detected distance between the
user and closet obstacle. In case of emergency, the current location of the disable user will be sent via
SMS to someone whose phone number is provided by the user using both technologies GSM and GPS.
The design of the system is quite comfortable as it is wearable. Also, the features that are provided
to the user can give him/her more sense to the surrounded environment. However, the system needs
a power tracker to keep a track of the battery. The emergency aid is not powerful as the user needs to
press the button in case of the emergency and she/he has to enter phone numbers of his/her relatives,
which might be a limiting factor. It would be better if the emergency feature was provided using
audio messages.
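The two feedback paths can be summarised in a short, purely illustrative sketch (the firmware of [45] is not published; the alert radius, the callback names speak and send_sms, and the message formats are assumptions standing in for the MP3 and GSM paths):

def handle_tick(sonar_distance_cm, color_reading, emergency_pressed, gps_fix, speak, send_sms):
    """Hypothetical decision logic for the glove in [45]: obstacle and color
    announcements over audio, and an SMS with the GPS fix when the button is pressed."""
    if sonar_distance_cm is not None and sonar_distance_cm < 100:   # assumed alert radius
        speak(f"Obstacle ahead, about {sonar_distance_cm:.0f} centimetres away")
    if color_reading is not None:
        speak(f"The object in front of you is {color_reading}")
    if emergency_pressed and gps_fix is not None:
        lat, lon = gps_fix
        send_sms(f"Emergency! My current location is {lat:.5f}, {lon:.5f}")  # to the stored contact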
A Path Force Feedback Belt (PF belt)
A Path Force Feedback belt concept was presented by Oliveira to help blind people navigate outdoors along their road [46]. Figure 23 shows the three main components of the force feedback belt design; these are: the main unit (the processor) with two dual video cameras, the power supply, which is packed into a pocket, and the belt to be worn around the user's waist. The belt has a number of cells that give feedback to the user. The processing unit uses the two video cameras to capture the video stream and then generates a 3D model of the user's surrounding area, as shown in Figure 24.
Figure 23. The prototype of Path Force Feedback belt design [46].
Figure 24. The detection process of force feedback belt [46].
As the surrounding environment of the user is tracked by the processing unit in the 3D model, the main features of the environment, such as sidewalk borders or walls, are extracted. In addition, the system aids the blind user in her/his mobility by sending signals based on the extracted features to the corresponding cells of the force feedback belt. The corresponding cells vibrate around the belt and show the user the right path. The system is designed such that each feature has its own signature vibration pattern, so each vibration frequency differentiates a specific feature or obstacle, e.g., the sidewalk border marked in blue in Figure 24. However, the user needs to be trained to distinguish the meaning of each frequency or combination of frequencies.
Using a 3D model within a sliding volume with continuous updating provides a better and faster feature extraction process, especially over buildings and other important and urgent objects. At the same time, it can reduce the main memory consumption. Otherwise, collision awareness is triggered in case the system is unable to capture the object, such as the floor.
The detection range for this design is too small, as the system extracts the features of only the closest objects, as explained in the paper. The blind person needs to be familiar with the surrounding area to have a proper reaction. Also, using vibration patterns as feedback instead of audio format is not an excellent solution, as the person can lose the ability to discriminate between such patterns over time, especially because there are multiple vibrations that need to be known by the user.
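The feature-to-vibration mapping can be pictured with a small sketch (the cell count, the frequency values, and the feature names below are invented for illustration; [46] does not specify them):

FEATURE_FREQUENCY_HZ = {"sidewalk_border": 4, "wall": 8, "obstacle": 15}   # illustrative signatures

def belt_cue(bearing_deg, feature_type, n_cells=8):
    """Map one extracted feature to (cell index, vibration frequency): the cell is
    chosen from the feature's bearing around the waist, the frequency from its type."""
    cell = int(((bearing_deg % 360.0) / 360.0) * n_cells) % n_cells
    return cell, FEATURE_FREQUENCY_HZ.get(feature_type, 2)   # low-frequency default cue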
FingerReader and Eye Ring
A supportive reading solution for blind people called FingerReader was introduced by Shilkrot et al. to aid disabled people in reading printed texts with a real-time response [47]. This device is worn on the index finger for close-up scanning. The device scans the printed text one line at a time, and the response is given as tactile feedback and in audio format. FingerReader is a continuation of EyeRing, which was presented in [48] for detecting one particular object at a time by pointing at and then scanning that item using the camera on the top of the ring, as shown in Figure 25.
Figure 25. (a) The prototype of the EyeRing; (b) The process of EyeRing device of detecting and interaction application [48].
In this design, two vibration motors with additional multimodal feedback, a dual-material case for more comfort around the finger, and a high-resolution video stream are the extensions of the FingerReader device, as shown in Figure 26. The haptic feedback is provided to guide the user on where he/she should move the camera.
Figure 26. The prototype of FingerReader [47].
The team used a text extraction algorithm that is integrated with Flite Text-To-Speech [49] and OCR [50]. The proposed algorithm extracts the printed text through the close-up camera. Then, it matches the pruned curves with the lines. Duplicated words are discarded using a 2D histogram. After that, the algorithm defines the words from the characters and sends them to the OCR engine. The detected words are saved in a template as the user continues to scan, so they can be tracked by the algorithm for any match. The user receives audio and haptic feedback whenever he/she sidetracks from the current line. Furthermore, the user receives signals through the haptic feedback to inform her/him about the end of the line if the system does not find any more printed text blocks. Figure 27 shows the extraction and detection process of the system.
Figure 27. The process of the extraction and detection of printed text line [47].
The device was tested on four users after individual training which lasted 1 h. The feedback of the users indicated that the haptic feedback was more efficient than the audio response regarding the directions. In addition, there was a long pause between each word, which confuses the user regarding what he/she should do next. However, the idea of the system is a great supportive reading solution for blind people.
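For intuition, a single step of such a fingertip-reading loop could look like the sketch below (ours, not the authors' code: Tesseract via pytesseract stands in for the OCR engine, speak/buzz stand in for the audio and haptic channels, and the drift threshold is an assumption):

import pytesseract
from PIL import Image

def read_fingertip_frame(frame_path, baseline_y, speak, buzz, drift_px=25):
    """OCR one close-up frame, speak the detected words, and buzz if the finger
    drifts off the tracked text line or no more printed text is found (cf. [47])."""
    data = pytesseract.image_to_data(Image.open(frame_path), output_type=pytesseract.Output.DICT)
    words, tops = [], []
    for text, top, conf in zip(data["text"], data["top"], data["conf"]):
        if text.strip() and float(conf) > 60:        # keep confident word detections only
            words.append(text)
            tops.append(top)
    if not words:
        buzz("end-of-line")                           # no more printed text blocks found
        return baseline_y
    line_y = sum(tops) / len(tops)
    if baseline_y is not None and abs(line_y - baseline_y) > drift_px:
        buzz("off-line")                              # haptic cue to move back onto the line
    speak(" ".join(words))
    return line_y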
Navigation Assistance Using RGB-D Sensor with Range Expansion (Nav RGB-D)
An assistive navigator integrating both range and visual information was introduced by A. Aladren et al. to help blind people navigate indoor areas [51]. This proposed device can be more than a navigator for blind people; it can be a light source for anyone in dark places. The device contains two parts: an RGB-D sensor, which obtains the color and range information using both infrared technology and density images, and a laptop packed in a bag to which it is connected. The device is worn on the user's neck, as illustrated in Figure 28.
Figure 28. The proposed device [51].
This work tries to overcome the limitation of range information by using vision computing techniques for further detection. Three steps take place in this workflow after the image is captured by the RGB-D sensor. The 3D point cloud is used to extract the main features and to filter the points so that each cube of the captured image is represented by one point. RANdom Sample Consensus (RANSAC) is the detection algorithm used to reject outliers, with the required number of samples given by:
m = log(1 − P)/log(1 − (1 − ε)^p) (2)
In Equation (2), m is the number of solutions drawn from the space, p is the dimension of the model, P is the probability of a successful computation, and ε is the percentage of outliers, i.e., the probability of failure. These two steps are repeated recursively until the least number of points is reached. Once the system reaches the step of classifying the object as either floor or obstacle, the vision information technique starts to analyze the extracted cloud points based on lighting, geometry, and hue features using the mean-shift algorithm, as shown in Figure 29. Based on the comparison of each extracted pixel against the above similarity principles, pixels are classified under the Floor-seed category.
Then, they applied both the probabilistic Hough Line Transform and the Canny edge detector [52] to generate the border line between obstacles and floor, which is represented as polygons. Hence, based on the floor division, each region is identified as either being floor or not. When the number of extracted lines in the comparison is too low or too high, watershed segmentation is needed.
The system shows a positive performance in small places by integrating both the probabilistic Hough Line Transform and the Canny edge detector to classify regions as either obstacle or floor. However, the system will not provide good results when the place has a number of windows, because of the sensitivity of the infrared sensor to sunlight.
Figure 29. The process of the extraction and expand the range detection text [51].
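Equation (2) translates directly into a one-line helper; the sketch below (with illustrative default values, not taken from [51]) shows how the number of RANSAC draws grows with the outlier fraction:

import math

def ransac_iterations(P=0.99, eps=0.5, p=3):
    """Equation (2): number of RANSAC samples m needed so that, with probability P,
    at least one draw of p points (a plane needs p = 3) is outlier-free when a
    fraction eps of the points are outliers."""
    return math.ceil(math.log(1 - P) / math.log(1 - (1 - eps) ** p))

# e.g. ransac_iterations(0.99, 0.5, 3) -> 35 draws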

Mobile Crowd Assisted Navigation for the Visually-impaired (Mobile Crowd Ass Nav)
A webapp over the Google engine for smartphones, called Mobile Crowd Assisted Navigation, was developed in [53] to navigate visually-impaired people between two points online. The aim of this framework is to offer the user accessible, efficient, and flexible crowd services for visually-impaired people. GPS, compass, accelerometer, and camera are used onboard. The smartphone streams the videos and sensory information to a crowd server to be used by the volunteers.
The volunteers' feedback is gathered by the Crowd program, and then the system sends the final decision to the blind user through audio format, vibration, or both. The video recorded by the visually-impaired user is referred to as a room, and each volunteer's feedback is weighted based on the accuracy of the information. The reason behind this aggregation process, which is shown in Figure 30, is to eliminate conflicts in the received information about the same query, whether it comes from more than one volunteer or from a vision-algorithms machine, as shown in Figure 31.
Two experiments were conducted to direct the user from one room to another using the proposed webapp, once with a simple sum aggregation approach and once with a legion leader approach. Another experiment was done with eight blindfolded participants over an obstacle path using the simple sum aggregation approach.
The framework can be considered an economical solution for visually-impaired people. However, the system itself needs advanced experiments and evaluation, with consideration of the delay and the time cost of the aggregation process, as these factors play the main roles in the system. The authors need to test the volumes of data that can be received and aggregated and how to best feed this information to the visually-impaired person.
Figure 30. The implemented app [53].
Figure 31. The proposed applications dataflow [53].
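The simple-sum aggregation mentioned above can be pictured with a few lines of Python (a hypothetical sketch; [53] does not publish code, and the answer strings and weights below are invented): each volunteer's answer to a query is weighted by an accuracy score, and the best-supported answer is returned to the user.

from collections import defaultdict

def aggregate_answers(answers):
    """Weighted simple-sum aggregation of crowd feedback for one query."""
    scores = defaultdict(float)
    for answer, accuracy_weight in answers:          # e.g. ("turn left", 0.8)
        scores[answer] += accuracy_weight
    return max(scores, key=scores.get) if scores else None

# aggregate_answers([("turn left", 0.8), ("turn left", 0.6), ("go straight", 0.9)]) -> "turn left"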
Figure 31. The proposed applications dataflow [53].
A Design of Blind-guide Crutch Based on Multi-sensors (DBG Crutch Based MSensors)
A Design of Blind-guide Crutch Based on Multi-sensors (DBG Crutch Based MSensors)
ABased
Designon the ultrasonic
of Blind-guide distance
Crutch Based measurement
on Multi-sensors approach,
(DBG Crutcha guidance system for blind people
Based MSensors)
Based on the ultrasonic distance measurement approach, a guidance
was proposed in [54]. The purpose of this system is to help blind people in detecting system for and
blind people
avoiding
was proposed
Based on in
the [54]. The
ultrasonic purpose
distance of this system
measurement is to help
approach, blind
a people
guidance
the obstacles in front, left front, and right front of the user as shown in Figure 32. in detecting
system for and
blind avoiding
people was
the obstacles
proposed in front, left front, and right front of the user as shown in Figure 32.
Figure 33 displays the replacement of the three ultrasonic sensors on the cane. The functionthe
in [54]. The purpose of this system is to help blind people in detecting and avoiding of
Figure
obstacles in 33 displays
front, left the
front, replacement
and right of
front the
of three
the user ultrasonic
as shown sensors
in Figureon the
32. cane.
these sensors is to collect the distance information from different ranges; the top sensor is used for The function of
these sensors
detecting is to collect
the overhead the distance
obstacle and theinformation
other two are fromuseddifferent ranges;front
for detection the top sensorInisaddition,
obstacles. used for
detecting
ultrasonicthe overhead obstacle
transmitting and the
and receiving other two
modules, are used
voice for detection
and vibration front and
modules obstacles.
the keyIn to
addition,
switch
ultrasonic transmitting and receiving modules, voice and vibration modules
between the feedback modules are used in this system. The whole system is controlled by and the key to switch
the
between the feedback
STC15F2K60S2 modules are used in this system. The whole system is controlled by the
microcontroller.
STC15F2K60S2 microcontroller.
Sensors 2017, 17, 565 23 of 42

Figure 33 displays the replacement of the three ultrasonic sensors on the cane. The function
of these sensors is to collect the distance information from different ranges; the top sensor is
used for detecting the overhead obstacle and the other two are used for detection front obstacles.
In addition, ultrasonic transmitting and receiving modules, voice and vibration modules and the key
to switch between the feedback modules are used in this system. The whole system is controlled by
the STC15F2K60S2
Sensors 2017, 17, 565 microcontroller. 22 of 41
Sensors 2017, 17, 565 22 of 41

Figure 32. The proposed crutch with displayed detection ranges [54].
Figure
Figure 32. The proposed
32. The proposed crutch
crutch with
with displayed
displayed detection
detection ranges
ranges [54].
[54].

Figure 33. Replacement of three ultrasonic sensors on the cane [54].


Figure 33. Replacement of three ultrasonic sensors on the cane [54].
Figure 33. Replacement of three ultrasonic sensors on the cane [54].
The STC15F2K60S2 MCU controls the signals between ultrasonic Transmit and Receive
modules.The TheSTC15F2K60S2
travelled times MCU need controls the signals
to be recorded between
separately such ultrasonic
as time1, Transmit
time2 and and time3Receive
as the
modules.The STC15F2K60S2
The travelled MCU
times controls
need to the
be signals
recorded between
separately ultrasonic
such as Transmit
time1, and
time2 Receive
and modules.
time3 as the
the
ultrasonic signal is emitted and the echo signals are detected. If the time counter is larger than
The travelled
ultrasonic times
signal is need to be
emitted and recorded
the echo separately
signals aresuch as time1,
detected. If time2
the time and time3isaslarger
counter the ultrasonic
than the
setup threshold, then there are no obstacles presented in that area. Based on the detected distance
signal
setup theis emitted and
threshold, the echoaresignals are detected. If the timethat
counter isBased
largeronthanthethe setup threshold,
from obstaclethenandthere
the sensor, no obstacles
the alarm presented
decisioninmaking area.
algorithm producesdetected distance
the warning
then there
from theeither are no
obstacle obstacles
andorthe presented in
sensor,formation.that area. Based on the detected distance
the alarm decision making algorithm produces the warning from the obstacle
message audio vibration
and
messagethe sensor,
either the
audio alarm
or decisionformation.
vibration making algorithm produces the warning message either audio or
The system was successful in detecting the obstacle in four directions: front, left front, right front
vibration formation.
The system wasthree
successful in detecting
and overhead using sensors. However,the theobstacle
detection in four
range directions:
is small asfront, left front, right
the maximum rangefront
is 2
The system was three
successful in detecting the obstacle in four directions: front, left front, right front
m. Also, the system can be considered as obstacle avoidance system, but not a navigation systemisas2
and overhead using sensors. However, the detection range is small as the maximum range
and overhead
m.isAlso, usingcan
the system threebe sensors.
considered However,
as obstaclethe avoidance
detection range
system, is but
small
notasaregarding
the maximum
navigation range
it claimed. The feedback of this system only consists of warning messages thesystem
obstacleas
is 2 m. Also,
it is claimed. the system
The feedback can be considered
of thisdirections as
system only obstacle avoidance
consistsforward. system, but not a navigation
of warning messages regarding the obstacle system
location and there were no given to proceed
as it is claimed.
location and there Thewere
feedback of this
no given system only
directions consists
to proceed of warning messages regarding the obstacle
forward.
locationUltrasonic Assistive Headset for visually-impaired
and there were no given directions to proceed forward. people (Ultra Ass Headset)
Ultrasonic Assistive Headset for visually-impaired people (Ultra Ass Headset)
An assistive headset was proposed in [55] to navigate visually-impaired people based on the
An assistive
ultrasonic distanceheadset was proposed
measurement in [55]
technology. to navigate
Figure visually-impaired
34 illustrates the design of the people basedheadset
ultrasonic on the
ultrasonic distance measurement technology. Figure 34 illustrates the design
which contains four ultrasonic sensors; two sensors cover each membrane to detect left and right of the ultrasonic headset
which contains
obstacles. four ultrasonic
DYP-ME007 is the chosensensors;
typetwo sensors cover
of ultrasonic sensoreachformembrane to detect left and
a distance measurement. right
ISD2590
obstacles. DYP-ME007 is the chosen type of ultrasonic sensor for a distance
recording storage is used to record the recommended directions. There are six recorded messages, measurement. ISD2590
recording
the selected storage is usedistobased
information recordonthe therecommended
intersection of directions. There sensors
two ultrasonic are six recorded messages,
in case there is an
Sensors 2017, 17, 565 24 of 42

Ultrasonic Assistive Headset for visually-impaired people (Ultra Ass Headset)


An assistive headset was proposed in [55] to navigate visually-impaired people based on the
ultrasonic distance measurement technology. Figure 34 illustrates the design of the ultrasonic headset
which contains four ultrasonic sensors; two sensors cover each membrane to detect left and right
obstacles. DYP-ME007 is the chosen type of ultrasonic sensor for a distance measurement. ISD2590
recording storage is used to record the recommended directions. There are six recorded messages, the
selected information is based on the intersection of two ultrasonic sensors in case there is an obstacle.
The function of this system is as follows: each sensor has an ID which is produced as a binary
code.
SensorsOnce the565
2017, 17, sensor receives a reflection of the ultrasonic wave, an output of 1 will be sent to the41
23 of
microcontroller,
Sensors 2017, 17, 565 otherwise 0 will be sent. Using the binary code, the microcontroller can determine23 of 41
which sensor is the receiver. Based on that, the audio feedback will be played back to the user. Figure
which sensor is the receiver. Based on that, the audio feedback will be played back to the user. Figure 35
which
shows
35 showssensor
the the is
completedthe receiver.
completeddesign Based on that,
of proposed
design the audio
system.
of proposed system. feedback will be played back to the user. Figure
35 shows the completed design of proposed system.

Figure34.
Figure 34.The
Thedesign
designof
ofultrasonic
ultrasonicheadset
headset[55].
[55].
Figure 34. The design of ultrasonic headset [55].

(a) (b)
(a) (b)
Figure 35. (a,b) Display the proposed ultrasonic headset with illustrating of the circuit and the solar
Figure 35.
panels35. (a,b) Display the proposed ultrasonic headset with illustrating of the circuit and the solar
[55].
Figure (a,b) Display the proposed ultrasonic headset with illustrating of the circuit and the solar
panels [55].
panels [55].
The system is a good energy saving solution. However, the system is limited in the directions it
The system
provides to theis isuser.
a good energy
Sixenergy saving
directions solution.
cannot However,enough
be sufficient the system is limited
to guide inuser
the in the indoors
directions it
and
The system
provides to the a good
user. Six saving
directions cannotsolution.
be However,enough
sufficient the system
to is limited
guide the theindoors
user directions
andit
outdoors.
provides Furthermore,
toFurthermore, the headset
the user. Six directions obscures
cannot the external
be sufficient enoughnoise, which blind
to guide people rely on to make
outdoors.
their decision in case thethe headset
system obscures
fails. the external noise, whichthe userpeople
blind indoors and
rely onoutdoors.
to make
Furthermore,
their decisionthe headset
in case the obscures the external noise, which blind people rely on to make their decision
system fails.
in
caseA the systemDevice
Mobility fails. for the Blind with Improved Vertical Resolution Using Dynamic Vision Sensors
A(MobiDevice
Mobility Device Improvedfor the Blind with Improved Vertical Resolution Using Dynamic Vision Sensors
VerticleResolion)
A(MobiDevice
Mobility Device for the
Improved Blind with Improved Vertical Resolution Using Dynamic Vision Sensors
VerticleResolion)
(MobiDevice Improved VerticleResolion)
Two retina-inspired dynamic vision sensors (DVS) were deployed in [56] to improve the
Two of
mobility retina-inspired
visually-impaired dynamic
people. vision
Figure sensors
36 (DVS) (DVS) the
illustrates were deployed
proposed in [56]
device to beto improve
mounted onthe
the
Two
mobility retina-inspired
of visually-impaired dynamic vision
people. Figuresensors
36 were
illustrates deployed
the proposed in [56]
deviceto improve
to be the
mounted mobility
on the
head
of of the user. Thepeople.
visually-impaired aim of this work
Figure 36isisillustrates
to represent the
thethe information
proposed deviceof the surrounding environment
head
as an ofaudio
the user. The aim
landscape fromof this
the work
simulated to represent
3-D sound, information
for example of to
MP3 the be mounted onenvironment
surrounding
format [57].
the head of
the user. The aim of this work is to represent the information
as an audio landscape from the simulated 3-D sound, for example MP3 format [57]. of the surrounding environment as an
audio landscape from the simulated 3-D sound, for example MP3 format [57].
A Mobility Device for the Blind with Improved Vertical Resolution Using Dynamic Vision Sensors
(MobiDevice Improved VerticleResolion)
Two retina-inspired dynamic vision sensors (DVS) were deployed in [56] to improve the
mobility of visually-impaired people. Figure 36 illustrates the proposed device to be mounted on the
head of
Sensors the17,
2017, user.
565 The aim of this work is to represent the information of the surrounding environment
25 of 42
as an audio landscape from the simulated 3-D sound, for example MP3 format [57].

Sensors 2017, 17, 565 24 of 41

adjustment in luminance that exceeds a predefined threshold. However, the movement of the DVS
can generate events Figure
at the 36.
edges of the objects
The proposed systemortoat
beany changed
mounted on the sharp textures. As a result, the
head [56].
Figure 36. The proposed system to be mounted on the head [56].
accumulation of the time interval is needed in order to form a visual frame as it is illustrated in Figure
37. These sensors perform in a similar way to human retina [58,59]. So, unlike the regular cameras
whichThese
Asare sensors
shown in on
based perform
Figure 37, in
a fixed the a colors
framesimilaronway
rate, the to
DVS human
output
creates retina depth
[58,59].extraction
ofasynchronous
image So, unlike
events the
are
every regular
represented
time cameras
based
it senses an
which are based on a fixed frame rate, DVS creates asynchronous events every time
on the event distance. The scene is divided into three horizontal areas based on the vertical reference it senses an
adjustment
of that view.inThe
luminance
middle eventthat exceeds a predefined
will be selected. Then, threshold. However,
the event will the movement
be displayed onto simulated of the
3-
DVS can generate
D sound. events
This, in turn, willatbethe edges of to
translated the objects
audio or attoany
format thechanged
user using sharp textures.The
the headset. As Acoustic
a result,
the accumulation
domain was used forof the time
visual interval istransmission.
information needed in orderThe to form atovisual
distance frame
the object canasbeit calculated
is illustrated
via
in
theFigure
stereo37.
information of DVS device.

Figure 37.
Figure 37. The
The accumulation
accumulation ofof the
the interval
interval time
time for
for forming
forming aa visual
visual frame
frame and
and the
the entire
entire system
system is
is
illustrated (the
illustrated (the event
event distance
distance is
is differentiated
differentiatedviaviacolors)
colors)[56].
[56].

The system was tested on two different groups to evaluate three terms which are: vertical
As shown in Figure 37, the colors on the output of image depth extraction are represented based
position (up, down), object localization and horizontal position (left, right). The developed head-
on the event distance. The scene is divided into three horizontal areas based on the vertical reference
related transfer functions and the proposition of the focus area were used to promote resolution.
of that view. The middle event will be selected. Then, the event will be displayed onto simulated 3-D
Although it is not possible to assess the object avoidance performance due to the lack of
sound. This, in turn, will be translated to audio format to the user using the headset. The Acoustic
information provided by the authors, the structure of the device is comfortable and light. The system
domain was used for visual information transmission. The distance to the object can be calculated via
provides a power consumption solution by using less energy consumption components.
the stereo information of DVS device.
When
The Ultrasonic
system Sensors
was tested and different
on two Computer Vision
groups to Join Forces
evaluate forterms
three Efficient Obstacle
which Detection
are: vertical and
position
Recognition (Ultrasonic for ObstDetectRec)
(up, down), object localization and horizontal position (left, right). The developed head-related transfer
functions and thedevice
A wearable proposition of the focusinarea
was introduced [60]were used tothe
to support promote resolution.
mobility of visually-impaired people
over the civilian environment using sensors and computer vision techniques. Figure 38 illustrates the
main components of the hardware architecture, whereas four ultrasonic sensors and a mobile video
camera are the data sources and the smart phone is the processing unit. The device was able to
identify both static and dynamic objects indoor and outdoor regardless to the objects characters by
using the machine learning and computer vision techniques. Hence, the device provides continuous
Sensors 2017, 17, 565 26 of 42

Although it is not possible to assess the object avoidance performance due to the lack of
information provided by the authors, the structure of the device is comfortable and light. The system
provides a power consumption solution by using less energy consumption components.
When Ultrasonic Sensors and Computer Vision Join Forces for Efficient Obstacle Detection and Recognition
(Ultrasonic for ObstDetectRec)
A wearable device was introduced in [60] to support the mobility of visually-impaired people
over the civilian environment using sensors and computer vision techniques. Figure 38 illustrates the
main components of the hardware architecture, whereas four ultrasonic sensors and a mobile video
camera are the data sources and the smart phone is the processing unit. The device was able to identify
both static and dynamic objects indoor and outdoor regardless to the objects characters by using the
machine learning and computer vision techniques. Hence, the device provides continuous information
about 2017,
Sensors the surrounding
17, 565 area through audio feedback and peeps for unrecognized objects. 25 of 41
Sensors 2017, 17, 565 25 of 41

Figure 38. The prototype of the proposed system [60].


Figure 38. The prototype of the proposed system [60].
Figure 38. The prototype of the proposed system [60].
Figure 39 exhibits the process of the system, where two important modules were used; obstacle
Figure
detection 39 recognition
and exhibits the process
modules. of The
the system,
obstaclewhere
where two
two important
detection important
module is modules were
dependent onused; obstacle
the gathered
detection and
information recognition
from both themodules.
recognition modules. The
ultrasonicThe obstacle
sensors anddetection
smartphonemodule is dependent
camera, which willonbethefed
gathered
gathered
to the
informationmodule
recognition from both the ultrasonic
ultrasonic
to classify sensors
the present andofsmartphone
objects the scene. Incamera,
addition,which
audiowill be fedwill
feedback to the
the
be
recognition module to classify the present
generated based on the position andpresent objects
distanceobjects of
of theof the scene.
the scene.
object In addition,
comparedaddition, audio feedback
feedback
to the users position. will be
be
generated
generated based
based on
on the
the position
position and
and distance
distance of
of the
the object
object compared
compared to tothe
theusers
users position.
position.

Figure 39. The process of the proposed navigation system [60].


Figure 39.
Figure 39. The
The process
process of
of the
the proposed
proposed navigation
navigation system
system [60].
[60].
The integration of the proposed filter for the interest points and the point tracker (Lucas-Kanade)
reduced the execution time because it requires fewer resources. Hence, RANSAC was used in order to
obtain the homographic transformation between two frames of the same scene. Then, the K-means
clustering algorithm was applied to identify the various dynamic objects. The detected objects were
classified as urgent or normal objects. Urgent objects are those whose distance from the user is less
than 2 m; furthermore, urgent objects are objects that are approaching the user, otherwise they are
normal objects. As a final step, an SVM classifier with a Chi-Square kernel was used for classification
training. Two thousand five hundred images were assigned to each class (four dynamic classes for
outdoors) in the training stage, which is considered a small number for an accurate classification rate.
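To make this processing chain more concrete, the following is a minimal sketch (our own illustration, not the authors' published code) of how such a motion-analysis stage could be assembled with OpenCV: Lucas-Kanade tracking of the interest points, a RANSAC homography between consecutive frames to separate camera motion from independently moving points, K-means clustering of the outlier points into candidate dynamic objects, and the urgent/normal rule described above. The function names, the fixed cluster count, and the thresholds are our assumptions.

```python
import cv2
import numpy as np

def find_dynamic_clusters(prev_gray, curr_gray, prev_pts, max_clusters=3):
    """prev_pts: interest points from the previous frame, float32 array of shape (N, 1, 2)."""
    # 1) Lucas-Kanade tracking of the interest points into the current frame.
    curr_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    ok = status.ravel() == 1
    good_prev, good_curr = prev_pts[ok], curr_pts[ok]
    if len(good_prev) < 4:                      # a homography needs at least 4 correspondences
        return []

    # 2) RANSAC homography between the two frames: inliers follow the global (camera)
    #    motion, outliers are treated as points lying on independently moving objects.
    H, inlier_mask = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
    if H is None:
        return []
    moving = good_curr[inlier_mask.ravel() == 0].reshape(-1, 2)
    if len(moving) < max_clusters:
        return []

    # 3) K-means clustering of the moving points into candidate dynamic objects.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _compact, labels, _centers = cv2.kmeans(np.float32(moving), max_clusters, None,
                                            criteria, 5, cv2.KMEANS_PP_CENTERS)
    return [moving[labels.ravel() == i] for i in range(max_clusters)]

def classify_urgency(distance_m, approaching):
    # Rule from the text: an object closer than 2 m, or one approaching the user, is urgent.
    return "urgent" if distance_m < 2.0 or approaching else "normal"
```

In the actual system, the recognition module (HOG/BoVW features classified by the Chi-Square kernel SVM) would then label each cluster, and the audio feedback would be generated from the object's class, position, and distance.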
The system can be considered a low power consumption solution. Also, the integration of both the
sensor network and the computer vision techniques validates the robustness and reliability of the obstacle
detection and recognition modules. However, the system was tested by 21 visually-impaired people.
As the users are more familiar with a white cane, their feedback was that the device is not trustworthy
enough and needs to be combined with the white cane. In addition, the system does not provide any
navigational information and it does not detect obstacles above waist level.
SUGAR System
The SUGAR system, which was proposed in [61], provides visually-impaired people with guidance
in an indoor environment. It provides accurate positioning information using Ultra-Wide Band (UWB)
technology. The system requires UWB sensors, a spatial database of the environment, a server to process
the collected data, a Wi-Fi connection to transmit data, and a smartphone (carried by the user) to
communicate with the visually-impaired person via audio feedback. UWB has a precision of up to
15 cm with a 95% confidence interval. UWB technology offers robustness because it does not need a
direct line of sight between tags and sensors. It uses UWB signals to acquire the person's location and
orientation. The system also has a spatial database of the environment; this spatial database is a mapping
of the environment being navigated by the person.
Other systems that use RFID or NFC require the deployment of a large number of devices to achieve
the same accuracy as SUGAR, and installation of the devices in key locations is also an expensive process.
The range of UWB sensors is 50 to 60 m, which makes them ideal for deployment in buildings with
larger rooms. A room with a side length of 100 m requires only four UWB sensors, whereas achieving the
same accuracy using RFID or NFC would require deploying sensors every 80 cm. Figure 40 shows the
physical components needed for the system.

Figure 40. The system's installation inside a room [61].

We can infer the workflow of the system from the proposed architecture, which is shown in
Figure 41. It starts with the UWB sensors constantly tracking the person using a tag carried by the user,
which enables the system to build Cartesian coordinates. The smartphone's compass also provides the
person's orientation. From the data collected, the user's location is mapped on a graph. Once the person
decides on the destination, the route planner module selects the best route. As the person navigates the
room, the navigation module compares their location and trajectory with the previously calculated route.
The smartphone receives the commands via a Wi-Fi connection and plays them back through the
headphones to the person.

Figure 41. The proposed architecture [61].
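As an illustration of that workflow, the sketch below (a simplification under our own assumptions, not the published SUGAR implementation) shows how a navigation module could compare the UWB-derived position and the compass heading against the planned waypoints and produce the short spoken command that the smartphone would play back. The waypoint list, the 0.5 m arrival tolerance, and the 15-degree straight-ahead threshold are illustrative values.

```python
import math

def next_instruction(position, heading_deg, route, arrive_tol_m=0.5, straight_tol_deg=15):
    """position: (x, y) from the UWB sensors in the room's Cartesian frame;
    heading_deg: smartphone compass heading in degrees; route: list of (x, y)
    waypoints produced by the route planner (consumed as they are reached)."""
    # Drop waypoints that the user has already reached.
    while route and math.dist(position, route[0]) < arrive_tol_m:
        route.pop(0)
    if not route:
        return "You have arrived at your destination."

    # Bearing from the user to the next waypoint (clockwise from the map's +y axis),
    # compared with the compass heading to get a signed turn in [-180, 180).
    dx, dy = route[0][0] - position[0], route[0][1] - position[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    turn = (bearing - heading_deg + 540) % 360 - 180

    if abs(turn) < straight_tol_deg:
        return f"Continue straight for {math.dist(position, route[0]):.1f} meters."
    side = "right" if turn > 0 else "left"
    return f"Turn {side} about {abs(turn):.0f} degrees, then continue."
```

In the real system this comparison runs on the server, and only the resulting audio instruction is sent to the smartphone over Wi-Fi.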

3. Analysis
In this section, we analyze the basic, yet most important, features of each device that we reviewed.
These five features are described in Table 1. Furthermore, we present here a quantitative evaluation of
the reviewed systems in terms of their progress based on the main features that need to be provided by
any system that offers a service for visually-impaired people.
The assistive device for a blind person needs to provide several features, among them: clear and
concise information within seconds; consistent performance during day and night time; operation both
indoors and outdoors; detection of objects from close range to further than 5 m; and detection of both
static and dynamic objects in order to handle any sudden appearance of an object; otherwise, the user's
life is at risk.
The evaluated features are basic and fundamental features for designing an assistive device for blind
people and for relying on its performance. Therefore, we give them the same weight, which is 10.0, as
each feature has a significant impact on the system's performance. Based on the collected information,
we gave a score to each feature of each system or device.
Since some of the evaluated systems are still in a research stage, the users' feedback was considered
in our evaluation only for the devices that were tested in real scenarios. Otherwise, our evaluation
mechanism was applied based on the following criteria. The features are user-dependent, so their
importance differs from one user to another; for example, some people are not interested in going outside
at night, in which case the day/night feature is irrelevant for them. Therefore, we have weighted all the
features with the same weight of 10.0.
The value of each feature of each system is referred to as V_k. This value is between 0 and 10.0.
The value 10.0 is assigned to a fully satisfied feature; however, prorated values are given to a feature that
does not fully satisfy the criteria in Table 1. For example, we gave a value of 5 to a system that performs
only indoors whereas it is supposed to perform both indoors and outdoors, e.g., the Smart Cane. This
strategy was applied to the analysis type, coverage, time, and object type features. However, the values
for the range feature were assigned differently; we could not give equal values to different ranges, since
we are looking for devices that provide a larger detection range. So, a value of 2.5 was given to those
with a detection range less than or equal to 1 m. This is a very low range and cannot be considered a
substitute for the white cane. We intentionally gave this low value to emphasize the importance of
providing longer detection ranges.
We used the following normalization formula (Equation (3)) to calculate the total score for each
system based on Table 2. The total score of each system in Table 2 gives a quick indication of how well
the device satisfies the features; a full review is provided in Table A1 (Appendix A).

\[ \text{Total Score} = \frac{\sum_{k=0}^{N} 10\,V_k}{N} + 2 \tag{3} \]

We add the constant value 2 to give a clear bias in the graph and to show the clear difference between
the systems and the supported features. N refers to the total number of features of each system and k is
the particular feature. Table 2 shows the evaluation for the most promising systems found in the literature.
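As a quick worked example of Equation (3), the snippet below recomputes a few of the totals reported in Table 2; here we treat the '-' entries as 0, which is consistent with the published totals.

```python
def total_score(feature_values):
    """Equation (3): Total Score = (sum of 10*V_k over the rated features) / N + 2."""
    n = len(feature_values)                       # N = number of evaluated features (5 here)
    return sum(10 * v for v in feature_values) / n + 2

# Rows taken from Table 2; '-' entries are treated as 0 (feature not supported).
rows = {
    "Smart Cane": [10, 5, 5, 5, 5],       # expected total: 62
    "ComVis Sys": [10, 10, 5, 10, 10],    # expected total: 92
    "PF belt":    [0, 5, 0, 2.5, 10],     # expected total: 37
}
for name, values in rows.items():
    print(f"{name}: {total_score(values):.0f}")
```

Running this reproduces 62, 92, and 37 for the Smart Cane, ComVis Sys, and PF belt rows, respectively.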

Table 2. Score and evaluation for each system.

Each feature carries a weight of 10. The feature columns are: Real Time/not Real Time; Coverage (Indoor, Outdoor, both); Time (Day, Night, both); Range (R ≤ 1 m, 1 m < R ≤ 5 m, R > 5 m); Object Type (Static, Dynamic, both).

System | Real Time | Coverage | Time | Range | Object Type | Total Score
*Smart Cane | 10 | 5 | 5 | 5 | 5 | 62
*Eye Subs | 10 | 5 | 10 | 5 | 5 | 72
*FAV&GPS | 10 | 5 | 5 | - | 10 | 62
*BanknotRec | 10 | 5 | 5 | - | 5 | 52
*TED | 10 | 5 | 10 | - | 5 | 62
*CASBlip | 10 | 10 | 10 | 5 | 5 | 82
*RFIWS | - | 5 | 10 | 5 | 5 | 52
*LowCost Nav | 10 | 5 | 10 | - | 5 | 62
*ELC | 10 | 5 | 10 | 2.5 | 5 | 67
*CG System | 10 | 5 | 5 | 5 | 5 | 62
*UltraCane | 10 | 5 | 10 | 5 | 5 | 72
*Obs Avoid using Thresholding | 10 | 5 | 5 | 5 | 10 | 72
*Obs Avoid using Haptics&Laser | 10 | 5 | 5 | 10 | 5 | 72
*ComVis Sys | 10 | 10 | 5 | 10 | 10 | 92
*Sili Eyes | - | 5 | - | 5 | 5 | 32
*PF belt | - | 5 | - | 2.5 | 10 | 37
*EyeRing | 10 | 10 | 5 | 10 (specific case) | 5 | 82
*FingReader | 10 | 10 | 5 | 10 (specific case) | 5 | 82
*Nav RGB-D | 10 | 5 | 5 | 5 | 5 | 62
*Mobile Crowd Ass Nav | 10 | 5 | 10 | - | 5 | 62
*DBG Crutch Based MSensors | 10 | 5 | 5 | 5 | 5 | 62
*Ultra Ass Headset | 10 | 10 | 10 | 5 | 5 | 82
*MobiDevice Improved VerticleResolution | 10 | 5 | 5 | 10 | 10 | 82
*Ultrasonic for ObstDetectRec | 10 | 10 | 5 | 5 | 10 | 82
*SUGAR System | 10 | 5 | 5 | 10 | 5 | 72

4. Conclusions and Discussion


Table 2 shows that none of the evaluated systems was 100% satisfactory in terms of the essential
features. These features not only meet the users' needs, but are also crucial from an engineering
perspective; they are the main building blocks for designing a device that provides services for blind
people. It is remarkable that each system supports certain features over the others, and one system might
have more features than another, but none of them supports all the evaluated features. That means we
cannot consider any of them an ideal device or system that the blind person can rely on and feel
confident about using. Only devices that have all the fundamental features will offer an effective
performance. One reason for this limitation is that most researchers work on providing a new feature,
but do not ensure that the fundamental features are supported before they add new ones. Another reason
is that the designers do not run enough experiments, which have to be carried out with blind people in
different scenarios to uncover and overcome any issues. The ideal device has to not only include a new
feature but also satisfy the main and basic needs of the user. The user needs to be able to sense the
surrounding environment at all times and everywhere. The system cannot be limited to a specific case;
otherwise, the design is incomplete.
Figure 42 shows the full picture of the evaluation, with the total score for each system. Systems with
higher scores demonstrate solid and improved features, such as the Computer Vision System that
Ensures Autonomous Navigation (ComVis Sys), which includes most of the features. The Path Force
Feedback Belt (PF belt) and other systems with lower scores need more enhancement, yet that does not
mean the value of their work is less than that of the systems with higher scores. The PF belt has a score
of 37% because it is not real time (it is in the research stage), it is applied only outdoors and is not suitable
indoors, its detection range is 1 m, which is considered a very small range, and it is limited in scope.
In this evaluation, we are trying to pave the road for other researchers to design devices that ensure
safety and independent mobility for visually-impaired people. The total score in Figure 42 reflects the
values given to each feature of each system in Table 2. In conclusion, the performance of most of the
studied systems is not 100% satisfactory with respect to the users' needs.

Figure 42. Systems evaluation presents the total score for each system.
Our aim in this paper is to shed some light on the missing features of the most useful and
significant devices. Since the technology advances every day, our work is intended to make this progress
happen as early as possible. Our focus in this paper is on the performance of the systems; after a careful
review and study of the above systems, we developed the benchmark table (Appendix A), which includes
technical parameters that affected the systems' performance and whose unavailability might prevent
the systems from offering the main and basic features that we discussed in Table 1. Those parameters
affected the performance of the systems, which should meet both the users' needs and the engineers'
viewpoints. Both the type of sensors used and the techniques employed can lead to limitations if they
are misused. For example, systems that used infrared technology may not have performed well during
the daytime due to the sensitivity of infrared to sunlight [62], whereas systems that used Radio Frequency
Identification cannot offer a large range due to the need to install tags everywhere the system is used [63].
Also, the Kinect sensor shows a small range, as its accuracy decreases as the distance between the scene
and the sensor increases [64,65]. In addition, the performance of ultrasonic sensors can be affected by
changes in the environmental parameters [66]; hence, their maximum detection range is around 5 m.
The limitation of each system is described individually in Appendix A with a more comprehensive
review from the technical side.
Other interesting devices for blind running athletes were reviewed, but are not included in our
paper due to their limited scope [67,68]. The running field is a purpose-designed field which will not
include general obstacles such as stairs. Also, the field is expected to have lines to direct the
running athletes.
As a summary of our evaluation, Figure 43 shows, for every system, the penetration rate of each
feature and its weight. For example, three of the presented systems are not real time systems, which
means they are still in a research stage; those are Sili Eyes, RFIWS, and PF belt. Moreover, 72% of the
systems have three features that are not fully satisfied. For instance, the Eye Subs system provides
outdoor coverage but not indoor coverage, its detection range is less than 5 m due to the ultrasonic
limitation, and it detects only static and not dynamic objects. This leads to one point: the researchers are
aware of some of the fundamental features, such as the real time feature, but not of others. So, some
systems provide indoor coverage but not outdoor coverage, yet the user will need the system's service
as much indoors as outdoors, maybe even more. With this humble study, we hope that we could provide
enough description of the main features that need to be included in any system that serves this group
of people.

Figure 43. Features overview for each system.
At the end of this discussion, we emphasize that this paper provides a set of essential guidelines
for designing assistive devices, along with the mentioned features, to ensure a satisfactory performance
and a better computer interaction scheme with the blind person. These guidelines include:
Performance: all the needed functions that are listed in Table 1 should be supported.
Wireless connectivity: the assistive device needs to be wirelessly connected with a database to
ensure information exchange.
Reliable: the device should meet its specifications for both software and hardware.
Simple: a simple interface and friendly operations can make the device easier for the user to use.
Wearable: from our study and review, it is more flexible and comfortable for the user to wear the
device rather than carry it.
Economically accessible: it is important to make the device economically accessible for the users
in order to enhance their quality of life; otherwise, only a few people can afford it.
We are planning to continue this review by studying each function individually to overcome the
mentioned weaknesses, by designing an intelligent framework that offers all the above features with
more scalability and that is economically accessible.
Author Contributions: The work has been primarily conducted by Wafa Elmannai under the supervision of
Khaled Elleithy. Wafa Elmannai wrote the manuscript. Extensive discussions about the algorithms and techniques
presented in this paper were carried out between the two authors over the past year.
Conflicts of Interest: The authors declare no conflict of interest.

Appendix A

Table A1. Evaluation of the reviewed systems based on additional features that caused the limitations of each system.

System Name/Weight/Type of Usage | Type of the Sensors | Accuracy | Analysis Type | Coverage | Measuring Angle | Cost | Limitation | Day/Night | Object Detection Range (Max/Min) | Classification of Objects (Dynamic/Static) | Used Techniques for Detection, Recognition or Localization
The water sensor cant
detect the water if it is
less than 0.5 deep.
Smart Cane Ultrasonic Outdoor The buzzer wont stop
Weight: N/A sensors (only areas Ultrasonic
N/A Real time N/A High before it is dry. Day 1 m1.5 m Static
Type of usage: Water have technology
pilot stage detector A power supply meter
RFID tags)
reading needs to be
installed to track
the status
The design of the system
is uncomfortable due to
the wood foundation
which will be carried by
the user most of the time
as well as and the
Eye Substitution 2 Ultrasonic figures holes. GPS, GSM,
Weight: light Each sensor and GPRS
Sensors The team used 3 motors 2 m3 m Static
Type of usage: N/A Real time Outdoor has a cone $1790 Day/Night Ultrasonic
Vibrator
pilot stage angle of 15 for haptic feedback. They
technology
motors
could use a 2-d array of
such actuators that can
give feedback about
more details.
Limited use by only
Android devices

The system was tested on Global Position


Optical System (GPS),
the function of the objects
Sensors 6 of visual avoidance technique. The Modified
Bumble bee angle with Geographical
Fusion of Artificial system has not been
Stereo Accurate (320 240 Information
Vision and GPS Camera results for tested or integrated with System (GIS)
Weight: N/A pixel) and Day 2 m10 m
3-axis Real time Outdoor low navigation systems to Static/dynamic
Type of usage:
user 100 field of and vision based
Accelerometers position insure its performance; positioning
deployment stage view with
Electronic (640 480 whether it will enhance SpikNet was
compass pixel) the navigation systems as
used as
Pedometer the authors promised or recognition
not is unknown. algorithm
This device was tested
only on the Thai
banknotes and coins, and
it is not capable of
Banknote
Recognition working on other RGB model
Weight: N/A iV-CAM 80% currencies that have Day Static Banknotes
Real time N/A N/A low Closed View
Type of usage: similar colors of Classification
pilot stage banknotes or similar Algorithm
sizes of coins.
The device needs a
method that controls the
natural light that is used
The
corresponds
are based Antenna is not
on the omni-directional.
TED feeling on The range of voltage is not
enough to supply TonguePlaced
Weight: light Detective the dorsal Real time Outdoor N/A low Day/Night N/A Static Electro tactile
Type of usage: Camera part of the the device.
pilot stage It is more difficult to Display
tongue,
(1,2,3,4) recognize the pulses on
100% the edges of the tongue.
(7) 10%
(5,6,8) 50%
Small detection range
80% in Image acquisition Binaural
range of 0.5 Acoustic module
CASBlip technique needs more Multiple double
Weight: N/A 3D CMOS m5 m and Indoor/ 64 in than 1X64
less than Real time N/A Day/Night 0.5 m5 m Static short-time
Type of usage: sensor outdoor azimuth CMOS image sensor. integration
pilot stage 80% with
further Acoustic module needs to algorithms
distance be improved (it can add
(MDSI)
sounds in elevation)
Collision of RFID
Each tag needs specific
rang which needs to be
RFIWS tested separated (scoop
Weight: N/A Ultra-high
None N/A Not-Real time Outdoor N/A N/A limitation) Day/Night 1 m3 m Static
Type of usage: frequency (UHF)
research stage The tags cannot read the
radio waves if case these
tags get wrapped up or
covered.
Good The accuracy of GPS
A Low Cost Outdoor accuracy receiver in high rise
3 Axial GPS technology
Assistive within
Navigation System accelerometer building is degraded. Geo-Coder-US
sensors residential Real time Outdoor N/A $138 Day N/A Static
Weight: N/A Limited scope, the GPS Module
Magnetometer area, but
Type of usage: receiver needs to be MoNav
sensor not as in an
pilot stage connected via Bluetooth ModuleBluetooth
urban
environment to perform.
Ultrasonic
It is a detector device for
ELC Ultrasonic sensor
physical obstacles above Close objects
Weight: 0.170 Kg sensor technology
N/A Real time Outdoor N/A N/A the waist line but the Day/Night over the Static
Type of usage: Micro-motor Haptics and
deployment stage navigation still relies on waistline
actuator tactile
the blind person.
techniques
Kinect Only 49 Fuzzy rules were The Canny filter
sensor for edge
Video covered which cover 80
different configurations. detection.
camera Stereo vision,
Cognitive Guidance stereo The perception capacities
vanishing point
System Imaging of the system need to be
and fuzzy rules
Weight: N/A sensor N/A Real time Indoor 180 N/A increased to detect Day 1.5 m4.0 m Static
Type of usage: sonny spatial landmarks. (fuzzy logic and
pilot stage ICx424 Improve the stabilization Mandani fuzzy
(640 480) of reconstructed walking decision system)
RBG-D plane and its registration to infer about
sensor for the distances
3D point through the frame. of objects.
Ultrasonic
sensor
(trans-receiver)
Ultrasonic Cane as a Arduino Just an object detector
Navigation Aid UNO Small detection rang
Weight: light 30 5150 cm Static Ultrasonic
microcontroller N/A Real time Indoor N/A Day/Night
Type of usage: Does not detect objects Technology
wireless
pilot stage X-bee S1 that suddenly appear
trans
receiver
module
The accuracy of Kinect Auto-adaptive
depth image decreases Thresholding
when the distance (divides equally
between the scene and
a depth image
Obstacle Avoidance sensor increase.
Using Auto-adaptive Auto-adaptive threshold into three areas.
Kinects Horizontal
It finds the
Thresholding depth 57.50 and could not differentiate Day 0.8 m4 m
Weight: N/A N/A Real time Indoor N/A Static/dynamic most optimal
camera Vertical between the floor and the
Type of usage: 43.5 object after 2500 mm. threshold value
pilot stage automatically
That increases the average
error of distance detection. (auto) and vary
The depth camera has to among each of
be carried which is a lot of those areas
load on the users hand. (adaptive).
Basely the
system was
built on the
use of laser
Obstacle Avoidance but the
Using Haptics and a Novint Precise location of
Laser Rangefinder Horizontal Haptics and
Falcon has 20 m with 3
Weight: N/A N/A Real time Indoor 270 in front N/A obstacles and angles were Day Static a Laser
Encoder cm error
Type of usage: of chair difficult to determine. Rangefinder
LED
pilot stage emitters
and photo
sensors
Supplementary
Sensors
Their fixed sizes of the
image based on the
category can make
detecting the same object
with different
sizes a challenge.
Since the proposed
system is based on a LucasKanade
algorithm and
smartphone video camera;
RANSAC
if the video camera is algorithm are
A Computer Vision covered by the blind
used for
System that Ensure persons clothes, then the detection.
Angular
the Autonomous system cannot work. Adapted HOG
Navigation Monocular High Indoor/ field of low Day Up to 10 m
Real time The objects are in dark Static/Dynamic descriptor
Weight: N/A camera Accuracy outdoor camera
Type of usage: view of 69 places and highly extractor, BoVW
dynamic objects vocabulary
deployment stage
cannot be detected. development
The overhead and noise of and SVM
smartphones videos. training are used
The tested dataset of 4500 for recognition.
images and dictionary of
4000 words are considered
as a small dataset.
The system is tested and it
works only on a Samsung
S4 which makes it
limited in scope.
A power supply meter
24-bit color
sensor reading needs to be
SONAR installed to
obstacle track the status.
Silicon Eyes detection Low accuracy of GPS
Weight: N/A light sensor receiver in high GPS & GSM
N/A Not-Real time Not tested N/A N/A Not tested 2.5 cm3.5 m Static
Type of usage: 3-axis rise buildings. technology
research stage MEMS The haptic feedback
magnetometer is not efficient.
3-axis Limited memory of 2 GB
MEMS
Accelerometer micro-SD card to save
user information.
The detection range for
this design is too small.
The user needs to be
IR sensor trained in differentiating
A Path Force Two depth the vibration
Feedback Belt sensors 360 over patterns for each cell. Infrared
Weight: N/A (sensor 2 N/A Not-Real Time Outdoor the blinds N/A Using vibration patterns Not tested Short Static/dynamic technology
Type of usage: dual video waist as feedback instead of and GPS
research stage cameras audio format is not an
type Kinect) excellent solution as the
person can lose the sense
of discrimination of such
techniques over the time.
The system does not
Atmel 8 bit provide a real time
EyeRing microcontroller video feedback. Roving
Weight: N/A OV7725 Not The system is limited to
VGA CMOS N/A Real time Indoor/ N/A Day Close up view Static Networks
Type of usage: outdoor Applicable single object detection, RN-42 Bluetooth
pilot stage sensor for
image which cannot be very module
acquisition useful to the
disabled person.
There is a real time
response for the audio
feedback, but there is a
Atmel 8 bit long stop between the
microcontroller instructions. Also, the
OV7725 Real time system prototype contains Roving
FingerReader VGA CMOS
Weight: N/A tactile Not two pieces one is the ring,
sensor for 93.9% Indoor/ Day Static Networks
Type of usage: feedback20 m Applicable N/A Close up view
image outdoor the other is the RN-42 Bluetooth
pilot stage processing
acquisition time computation element module
Vibration which need to be carried
motors all the time by the user for
I/0 speech, otherwise the
user will not be able to
receive the feedback.
RANdom
Sample
Consensus
Up to 3 m (RANSA)
Navigation The effective of the using range detection
Assistance Using infrared to the sunlight information algorithm
RGB-D Sensor With can negatively affect the technique and Image intensities
Range Expansion RGB-D sensor 95% Real time Indoor N/A low Night Static and depth
Weight: N/A performance of the from 3 m and
Type of usage: system outdoors and further using information
pilot stage during the day time. the vision (computer
information vision)
Infrared
technology and
density images
Mobile Crowd The collected information Crowd sounding
Assisted Navigation 20.5% is based on the volunteers
Camera availability. service through
for the improvement
GPS Goagle engine
Visually-impaired Compass in crowd Real time Indoor N/A N/A There is a possibility of no Day/Night N/A Dynamic
Weight: N/A sound for input in the interval time for navigation
Type of usage: Accelerometer Machine vision
navigation which fails the goal of the
pilot stage service. algorithm

A Design of 30 The detection


Blind-guide detection range is small.
Crutch Based on range for 2 This system is claimed to Ultrasonic
3 Ultrasonic sensors, 80 0 m2 m in distance
Multi-sensors N/A Real time Outdoor N/A be navigation system, Day Static
sensors detection front measurement
Weight: N/A however, there are no
Type of usage: range for approach
given directions
deployment stage overhead to the user.
Ultrasonic Assistive 4 Ultrasonic
Headset for
visually-impaired
type 60 between Limited directions
(DYP-ME007) Indoor/ ultrasonic are provided. Ultrasonic
people N/A Real time N/A Day/Night 3 cm4 m Static
Weight: light sensor outdoor distance The headset obscures the technology
Type of usage: obstacle sensors external noise.
pilot stage detector

The modules are


A Mobility Device 99% very expensive.
for the Blind with object Further intensive tests
Improved Vertical 2 detection, need to be done to show
Resolution Using retine-inspired 90% 8% the performance object
Dynamic Vision avoidance and navigation Day 0.5 m8 m Event-based
dynamic horizontal Real time Indoor N/A low Dynamic/static
Sensors algorithm
vision sensors localization, techniques, whereas, the
Weight: light (DVS) 96% 5.3% test was mainly on object
Type of usage: size detection technique for
pilot stage discrimination the central area
of the scene.
The system cannot detect
obstacles above Vision-based
Ultrasonic for waist level. object detection
ObstDetectRec 4 ultrasonic
sensors Indoor/ There is no navigational module.
Weight: 750 gram
(Maxsonar
N/A Real time
outdoor
40 Low Day 2<R5m Static/dynamic
Type of usage: information provided. Ultrasonic
pilot stage LV EZ-0) Small detection range. technology.
It is not an SVM
independent device.
UWB
Sensors would have to be positioning
deployed in every room. technique
Day
SUGAR system The room has to be Path Finding
Weight: N/A Ultra-wide (the system
band High Real time Indoor N/A N/A
mapped beforehand. 50 m60 m static
Type of usage: Accuracy was not Algorithm
Sensors(UWB) User needs to select tested for
pilot stage destination beforehand. Time Difference
night time) of Arrival
It is not suitable for
outside use. technique
(TDOA)
* N/A: Not Available.

References
1. World Health Organization. Visual Impairment and Blindness. Available online: http://www.Awho.int/
mediacentre/factsheets/fs282/en/ (accessed on 24 January 2016).
2. American Foundation for the Blind. Available online: http://www.afb.org/ (accessed on 24 January 2016).
3. National Federation of the Blind. Available online: http://www.nfb.org/ (accessed on 24 January 2016).
4. Velázquez, R. Wearable assistive devices for the blind. In Wearable and Autonomous Biomedical Devices and
Systems for Smart Environment; Springer: Berlin/Heidelberg, Germany, 2010; pp. 331349.
5. Baldwin, D. Wayfinding technology: A road map to the future. J. Vis. Impair. Blind. 2003, 97, 612620.
6. Blasch, B.B.; Wiener, W.R.; Welsh, R.L. Foundations of Orientation and Mobility, 2nd ed.; AFB Press: New York,
NY, USA, 1997.
7. Shah, C.; Bouzit, M.; Youssef, M.; Vasquez, L. Evaluation of RUNetra tactile feedback navigation system for
the visually-impaired. In Proceedings of the International Workshop on Virtual Rehabilitation, New York,
NY, USA, 2930 August 2006; pp. 7277.
8. Hersh, M.A. The Design and Evaluation of Assistive Technology Products and Devices Part 1: Design.
In International Encyclopedia of Rehabilitation; CIRRIE: Buffalo, NY, USA, 2010.
9. Marion, A.H.; Michael, A.J. Assistive technology for Visually-impaired and Blind People; Springer: London,
UK, 2008.
10. Tiponut, V.; Ianchis, D.; Bash, M.; Haraszy, Z. Work Directions and New Results in Electronic Travel Aids for
Blind and Visually Impaired People. Latest Trends Syst. 2011, 2, 347353.
11. Tiponut, V.; Popescu, S.; Bogdanov, I.; Caleanu, C. Obstacles Detection System for Visually-impaired
Guidance. New Aspects of system. In Proceedings of the 12th WSEAS International Conference on SYSTEMS,
Heraklion, Greece, 1417 July 2008; pp. 350356.
12. Dakopoulos, D.; Bourbakis, N.G. Wearable obstacle avoidance electronic travel aids for blind: A survey.
IEEE Trans. Syst. Man Cybern. Part C 2010, 40, 2535. [CrossRef]
13. Renier, L.; De Volder, A.G. Vision substitution and depth perception: Early blind subjects experience visual
perspective through their ears. Disabil. Rehabil. Assist. Technol. 2010, 5, 175183. [CrossRef]
14. Tapu, R.; Mocanu, B.; Tapu, E. A survey on wearable devices used to assist the visual impaired user navigation
in outdoor environments. In Proceedings of the 2014 11th International Symposium on Electronics and
Telecommunications (ISETC), Timisoara, Romania, 1415 November 2014.
15. Liu, J.; Liu, J.; Xu, L.; Jin, W. Electronic travel aids for the blind based on sensory substitution. In Proceedings
of the 2010 5th International Conference on Computer Science and Education (ICCSE), Hefei, China,
2427 August 2010.
16. Sánchez, J.; Elías, M. Guidelines for designing mobility and orientation software for blind children. In
Proceedings of the IFIP Conference on Human-Computer Interaction, Janeiro, Brazil, 1014 September 2007.
17. Farcy, R.; Leroux, R.; Jucha, A.; Damaschini, R.; Grégoire, C.; Zogaghi, A. Electronic travel aids and electronic
orientation aids for blind people: Technical, rehabilitation and everyday life points of view. In Proceedings
of the Conference & Workshop on Assistive Technologies for People with Vision & Hearing Impairments
Technology for Inclusion, Los Alamitos, CA, USA, 911 July 2006.
18. Kammoun, S.; Marc, J.-M.; Oriola, B.; Christophe, J. Toward a better guidance in wearable electronic
orientation aids. In Proceedings of the IFIP Conference on Human-Computer Interaction, Lisbon, Portugal,
59 September 2011.
19. Wahab, A.; Helmy, M.; Talib, A.A.; Kadir, H.A.; Johari, A.; Noraziah, A.; Sidek, R.M.; Mutalib, A.A. Smart
Cane: Assistive Cane for Visually-impaired People. Int. J. Comput. Sci. Issues 2011, 8, 4.
20. Bharambe, S.; Thakker, R.; Patil, H.; Bhurchandi, K.M. Substitute Eyes for Blind with Navigator Using
Android. In Proceedings of the India Educators Conference (TIIEC), Bangalore, India, 46 April 2013;
pp. 3843.
21. Vtek, S.; Klima, M.; Husnik, L.; Spirk, D. New possibilities for blind people navigation. In Proceedings of
the 2011 International Conference on Applied Electronics (AE), Pilsen, Czech, 78 September 2011; pp. 14.
22. Brilhault, A.; Kammoun, S.; Gutierrez, O.; Truillet, P.; Jouffrais, C. Fusion of artificial vision and GPS to
improve blind pedestrian positioning. In Proceedings of the 4th IFIP International Conference on New
Technologies, Mobility and Security (NTMS), Paris, France, 710 February 2011; pp. 15.
23. White, C.E.; Bernstein, D.; Kornhauser, A.L. Some map matching algorithms for personal navigation
assistants. Trans. Res. C Emerg. Tech. 2000, 8, 91108. [CrossRef]
24. Loomis, J.M.; Golledge, R.G.; Klatzky, R.L.; Speigle, J.M.; Tietz, J. Personal guidance system for the visually
impaired. In Proceedings of the First Annual ACM Conference on Assistive Technologies, Marina Del Rey,
CA, USA, 31 October1 November 1994.
25. Delorme, A.; Thorpe, S.J. SpikeNET: An event-driven simulation package for modelling large networks of
spiking neurons. Netw. Comput. Neural Syst. 2003, 14, 613627. [CrossRef]
26. Sirikham, A.; Chiracharit, W.; Chamnongthai, K. Banknote and coin speaker device for blind people. In
Proceedings of the 11th International Conference on Advanced Communication Technology (ICACT),
Phoenix Park, Korea, 1518 February 2009; pp. 21372140.
27. Dunai Dunai, L.; Chillarón Pérez, M.; Peris-Fajarnés, G.; Lengua Lengua, I. Euro Banknote Recognition
System for Blind People. Sensors 2017, 17, 184. [CrossRef] [PubMed]
28. Nguyen, T.H.; Le, T.L.; Tran, T.T.H.; Vuillerme, N.; Vuong, T.P. Antenna Design for Tongue electrotactile
assistive device for the blind and visually-impaired. In Proceedings of the 2013 7th European Conference on
Antennas and Propagation (EuCAP), Gothenburg, Sweden, 812 April 2013; pp. 11831186.
29. Icheln, C.; Krogerus, J.; Vainikainen, P. Use of Balun Chokes in Small-Antenna Radiation Measurements.
IEEE Trans. Instrum. Meas. 2004, 53, 498506. [CrossRef]
30. Nguyen, T.H.; Nguyen, T.H.; Le, T.L.; Tran, T.T.H.; Vuillerme, N.; Vuong, T.P. A wearable assistive device for
the blind using tongue-placed electrotactile display: Design and verification. In Proceedings of the 2013
International Conference on Control, Automation and Information Sciences (ICCAIS), Nha Trang, Vietnam,
2528 November 2013.
31. Dunai, L.; Garcia, B.D.; Lengua, I.; Peris-Fajarnés, G. 3D CMOS sensor based acoustic object detection and
navigation system for blind people. In Proceedings of the 38th Annual Conference on IEEE Industrial
Electronics Society (IECON 2012), Montreal, QC, Canada, 2528 October 2012.
32. Saaid, M.F.; Ismail, I.; Noor, M.Z.H. Radio frequency identification walking stick (RFIWS): A device for
the blind. In Proceedings of the 5th International Colloquium on Signal Processing & Its Applications,
Kuala Lumpur, Malaysia, 68 March 2009.
33. Harrison, M.; McFarlane, D.; Parlikad, A.K.; Wong, C.Y. Information management in the product lifecycle-the
role of networked RFID. In Proceedings of the 2nd IEEE International Conference on Industrial Informatics
(INDIN04), Berlin, Germany, 2426 June 2004.
34. Xiao, J.; Ramdath, K.; Losilevish, M.; Sigh, D.; Tsakas, A. A low cost outdoor assistive navigation system for
blind people. In Proceedings of the 2013 8th IEEE Conference on Industrial Electronics and Applications
(ICIEA), Melbourne, Australia, 1921 June 2013; pp. 828833.
35. Fonseca, R. Electronic long cane for locomotion improving on visual impaired people: A case study.
In Proceedings of the 2011 Pan American Health Care Exchanges (PAHCE), Rio de Janeiro, Brazil,
28 March1 April 2011.
36. Landa-Hernández, A.; Bayro-Corrochano, E. Cognitive guidance system for the blind. In Proceedings of the
IEEE World Automation Congress (WAC), Puerto Vallarta, Mexico, 2428 June 2012.
37. Benjamin, J.M. The Laser Cane. J. Rehabil. Res. Dev. 1974, 10, 443450.
38. Kumar, K.; Champaty, B.; Uvanesh, K.; Chachan, R.; Pal, K.; Anis, A. Development of an ultrasonic cane
as a navigation aid for the blind people. In Proceedings of the 2014 International Conference on Control,
Instrumentation, Communication and Computational Technologies (ICCICCT), Kanyakumari District, India,
1011 July 2014.
39. Saputra, M.R.U.; Santosa, P.I. Obstacle Avoidance for Visually Impaired Using Auto-Adaptive Thresholding
on Kinects Depth Image. In Proceedings of the IEEE 14th International Conference on Scalable
Computing and Communications and Its Associated Workshops (UTC-ATC-ScalCom), Bali, Indonesia,
912 December 2014.
40. Jassim, F.A.; Altaani, F.H. Hybridization of Otsu Method and Median Filter for Color Image Segmentation.
Int. J. Soft Comput. Eng. 2013, 3, 6974.
41. Ahlmark, I.; Hakan Fredriksson, D.; Hyyppa, K. Obstacle avoidance using haptics and a laser rangefinder.
In Proceedings of the 2013 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Tokyo,
Japan, 79 November 2013.
42. SenseGrapics AB, Open Source HapticsH3D.org. Available online: http://www.h3dapi.org/ (accessed on
18 June 2016).
43. Tapu, R.; Mocanu, B.; Zaharia, T. A computer vision system that ensure the autonomous navigation of
blind people. In Proceedings of the IEEE E-Health and Bioengineering Conference (EHB), Iasi, Romania,
2123 November 2013.
44. Tapu, R.; Mocanu, B.; Zaharia, T. Real time static/dynamic obstacle detection for visually impaired persons.
In Proceedings of the 2014 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV,
USA, 1013 January 2014.
45. Prudhvi, B.R.; Bagani, R. Silicon eyes: GPS-GSM based navigation assistant for visually impaired using
capacitive touch braille keypad and smart SMS facility. In Proceedings of the 2013 World Congress on
Computer and Information Technology (WCCIT), Sousse, Tunisia, 2224 June 2013.
46. Fradinho Oliveira, J. The path force feedback belt. In Proceedings of the 2013 8th International Conference
on Information Technology in Asia (CITA), Kuching, Malaysia, 14 July 2013.
47. Shilkrot, R.; Huber, J.; Liu, C.; Maes, P.; Nanayakkara, S.C. Fingerreader: A wearable device to support
text reading on the go. In Proceedings of the CHI14 Extended Abstracts on Human Factors in Computing
Systems, Toronto, ON, Canada, 26 April1 May 2014.
48. Nanayakkara, S.; Shilkrot, R.; Yeo, K.P.; Maes, P. EyeRing: A finger-worn input device for seamless
interactions with our surroundings. In Proceedings of the 4th Augmented Human International Conference,
Stuttgart, Germany, 78 March 2013.
49. Black, A.W.; Lenzo, K.A. Flite: A small fast run-time synthesis engine. In Proceedings of the ITRW on Speech
Synthesis, Perthshire, Scotland, 29 August1 September 2001.
50. Smith, R. An overview of the tesseract OCR engine. In Proceedings of the ICDAR, Paraná, Brazil,
2326 September 2007; pp. 629633.
51. Aladren, A.; Lopez-Nicolas, G.; Puig, L.; Guerrero, J.J. Navigation Assistance for the Visually Impaired Using
RGB-D Sensor with Range Expansion. IEEE Syst. J. 2016, 10, 922932. [CrossRef]
52. Kiryati, N.; Eldar, Y.; Bruckstein, M. A probabilistic Hough transform. Pattern Recogn. 1991, 24, 303316.
[CrossRef]
53. Olmschenk, G.; Yang, C.; Zhu, Z.; Tong, H.; Seiple, W.H. Mobile crowd assisted navigation for the visually
impaired. In Proceedings of the 2015 IEEE 15th International Conference on Scalable Computing and
Communications and Its Associated Workshops (UIC-ATC-ScalCom), Beijing, China, 1014 August 2015.
54. Yi, Y.; Dong, L. A design of blind-guide crutch based on multi-sensors. In Proceedings of the 2015
12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Zhangjiajie, China,
1517 August 2015.
55. Aymaz, Ş.; Çavdar, T. Ultrasonic Assistive Headset for visually impaired people. In Proceedings of the
2016 39th International Conference on Telecommunications and Signal Processing (TSP), Vienna, Austria,
2729 June 2016.
56. Everding, L.; Walger, L.; Ghaderi, V.S.; Conradt, J. A mobility device for the blind with improved vertical
resolution using dynamic vision sensors. In Proceedings of the 2016 IEEE 18th International Conference on
e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 1416 September 2016.
57. Ghaderi, V.S.; Mulas, M.; Pereira, V.F.S.; Everding, L.; Weikersdorfer, D.; Conradt, J. A wearable mobility
device for the blind using retina-inspired dynamic vision sensors. In Proceedings of the 37th Annual
International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy,
2529 August 2015.
58. Mueggler, E.; Forster, C.; Baumli, N.; Gallego, G.; Scaramuzza, D. Lifetime estimation of events from Dynamic
Vision Sensors. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation
(ICRA), Seattle, WA, USA, 2630 May 2015.
59. Nancy Owano. Dynamic Vision Sensor Tech Works Like Human Retina. Available online: http://phys.org/
news/2013-08-dynamic-vision-sensor-tech-human.html (accessed on 13 August 2016).
60. Mocanu, B.; Tapu, R.; Zaharia, T. When Ultrasonic Sensors and Computer Vision Join Forces for Efficient
Obstacle Detection and Recognition. Sensors 2016, 16, 1807. [CrossRef]
61. Martinez-Sala, A.S.; Losilla, F.; Sánchez-Aarnoutse, J.C.; García-Haro, J. Design, implementation and
evaluation of an indoor navigation system for visually-impaired people. Sensors 2015, 15, 3216832187.
[CrossRef]
62. Photonics, H. Characteristics and Use of Infrared Detectors. Available online: https://www.hamamatsu.
com/resources/pdf/ssd/infrared_kird9001e.pdf (accessed on 8 February 2017).
63. McCathie, L. The Advantages and Disadvantages of Barcodes and Radio Frequency Identification in Supply Chain
Management; University of Wollongong: Wollongong, NSW, Australia, 2004.
64. Andersen, M.R.; Jensen, T.; Lisouski, P.; Mortensen, A.K.; Hansen, M.K.; Gregersen, T.; Ahrendt, P. Kinect
Depth Sensor Evaluation for Computer Vision Applications; Electrical and Computer Engineering Technical
Report ECE-TR-6; Aarhus University: Aarhus, Denmark, 2012.
65. Neto, L.B.; Grijalva, F.; Maike, V.R.M.L.; Martini, L.C.; Florencio, D.; Baranauskas, M.C.C.; Rocha, A.;
Goldenstein, S. A Kinect-Based Wearable Face Recognition System to Aid Visually-impaired Users.
IEEE Trans. Hum.-Mach. Syst. 2016, 47, 5264. [CrossRef]
66. AIRMAR, Tech. Overview for Applying Ultrasonic Technology (AirducerTM Catalog). Available online:
www.airmar.com (accessed on June 2016).
67. Pieralisi, M.; Petrini, V.; Di Mattia, V.; Manfredi, G.; De Leo, A.; Scalise, L.; Russo, P.; Cerri, G. Design and
realization of an electromagnetic guiding system for blind running athletes. Sensors 2015, 15, 1646616483.
[CrossRef]
68. Pieralisi, M.; Di Mattia, V.; Petrini, V.; De Leo, A.; Manfredi, G.; Russo, P.; Scalise, L.; Cerri, G.
An Electromagnetic Sensor for the Autonomous Running of Visually-impaired and Blind Athletes (Part I:
The Fixed Infrastructure). Sensors 2017, 17, 364. [CrossRef] [PubMed]

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
