
GANESH A/l KRISHNAN

08UEB04834

1. Describe in detail the various components of a Machine Vision System with examples
and illustrations.
A machine vision system generally consists of an input source, optics, lighting, a part sensor, a frame grabber, a PC platform, inspection software, and digital I/O.
The input source of a machine vision system is normally the object or scene captured by the optics of the system. An example of an input source is the face of a person that is going to be captured by an image-capturing device.
The optics in the machine vision system are basically digital cameras, sensors, and other image-capturing devices. An example of an image-capturing device is OmniVision's OV10810, a CMOS image sensor able to record 1080p HD video and capture 10-megapixel still images.
Lighting is also an important part of a machine vision system, as it enables the captured image to be differentiated during quantization for the later processing stages. One example of vision lighting is the Smart Series HI-BRITE Illuminator. This lighting equipment delivers a high-intensity output using LED vision-lighting technology and is versatile, with 10-degree spot and 50-degree flood lens options, which is very useful on automated production lines.
The frame grabber of the machine vision system is very important, as it grabs an image so that it can be processed by the algorithm; one example of a frame grabber is the GRABLINK Full.
The PC platform and inspection software are important because they run the algorithms that process the image. Examples of software used for this purpose are OpenCV and Microsoft Visual Studio.
Digital I/O (digital input and output) is important to make the processing of the image faster and better integrated. Most digital I/O lines are present on the frame grabber and the PC platform used in the machine vision system.

2. How does lighting play an important role in designing machine vision systems? What
are the different types of lighting techniques used in vision systems? Explain in detail
with suitable examples.
Lighting plays an important role in designing a machine vision system: it maximizes the contrast of the feature of interest and minimizes the contrast of everything else, which enables the machine to identify and differentiate images with a faster processing time.
Among the lighting techniques used in vision systems are front lighting, back lighting, and structured lighting. Front lighting, in turn, can be divided into several types: ring-shaped lighting, spot lighting, tube lighting, and area-type lighting.


Type of front lighting     Usage
Ring-shaped lighting       Used to detect loose caps on a production line
Spot lighting              Used to check chip orientation on an embossed tape
Tube lighting              Used to detect stains on sheets
Area-type lighting         Used to detect hole positions in a lead frame

3. What is noise? Describe with illustrations the operation of some of the linear and nonlinear filters.
Noise can be considered as unwanted data without meaning: data that does not carry the signal being transmitted but is instead an unwanted by-product of other processes. Noise occurs due to quantization (which reduces the continuous light levels to 256 discrete values), imperfect sensors, imperfect lighting conditions during acquisition, and compression formats.
Linear filters modify each pixel based on its neighbourhood. Linear filter operations can be combined in any order and still achieve the same result, which indirectly makes an operation easier to undo in case of error. Basically, in a linear filter, a larger neighbourhood results in a larger convolution mask and a greater degree of filtering, which removes a greater amount of the noise that appears in the image. The mean filter, for example, is a linear filter: it replaces each pixel in a given window with the average of all the values in the local neighbourhood.
The mean filter is computed with this formula:

    h[i, j] = (1/M) * sum of f[k, l]  over all (k, l) in N(i, j)

where N(i, j) is the neighbourhood of pixel (i, j) and M is the number of pixels in that neighbourhood.
Example of mean filter operation:

    [Figure: a 3x3 neighbourhood before and after mean filtering]
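The mean-filter formula above can be sketched in plain NumPy. This is a minimal illustration, not a production filter: border pixels without a full neighbourhood are simply left unchanged.

```python
import numpy as np

def mean_filter(image, size=3):
    """Replace each interior pixel with the average of its size-by-size
    neighbourhood; border pixels are left unchanged for simplicity."""
    half = size // 2
    out = image.astype(float).copy()
    for i in range(half, image.shape[0] - half):
        for j in range(half, image.shape[1] - half):
            window = image[i - half:i + half + 1, j - half:j + half + 1]
            out[i, j] = window.mean()  # h[i,j] = (1/M) * sum of f[k,l]
    return out

noisy = np.array([[5, 2, 8],
                  [3, 9, 4],
                  [1, 7, 6]], dtype=float)
print(mean_filter(noisy)[1, 1])  # centre becomes (5+2+8+3+9+4+1+7+6)/9 = 5.0
```

The impulse value 9 at the centre is pulled toward the neighbourhood average, which is why the mean filter reduces noise but also blurs edges.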

A non-linear filter, on the other hand, is a signal-processing operation whose output is not a linear function of its input. Its output is not unduly affected by values that differ significantly from the typical values in the neighbourhood. An example of a non-linear filter is the median filter, which replaces the centre value of the window with the median of all the pixels in the window. This filter is very helpful in removing impulse noise, i.e. isolated very high or very low values, and it is very effective at preserving step edges without blurring the image.
Example of median filter operation: the nine values of a 3x3 window, sorted, are

    2, 3, 4, 5, 6, 7, 9, 18, 57

so the median is 6, which replaces the centre pixel; the impulse values 18 and 57 are discarded rather than averaged in.
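The median-filter step can be sketched the same way; the window below is an illustrative arrangement of the nine values from the worked example (the original layout of the window is not given in the source), with the same border simplification as before.

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each interior pixel with the median of its size-by-size
    neighbourhood; border pixels are left unchanged for simplicity."""
    half = size // 2
    out = image.astype(float).copy()
    for i in range(half, image.shape[0] - half):
        for j in range(half, image.shape[1] - half):
            window = image[i - half:i + half + 1, j - half:j + half + 1]
            out[i, j] = np.median(window)  # middle of the sorted window values
    return out

# A window holding the example's values; sorted: 2,3,4,5,6,7,9,18,57
window = np.array([[3, 57, 18],
                   [2,  5,  7],
                   [4,  9,  6]], dtype=float)
print(median_filter(window)[1, 1])  # 6.0 -- impulses 18 and 57 are ignored
```

Unlike the mean filter, the centre value here stays an actual pixel value from the window, which is why step edges survive median filtering without blurring.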

4. Discuss in detail the difference between a CCD imager and a CMOS imager. Explain the
role of frame grabbers in vision systems and describe the various functions of present day
frame grabbers.
CMOS stands for Complementary Metal Oxide Semiconductor, while CCD stands for Charge-Coupled Device. The two types of image sensor differ in several respects. One difference is pixel charge transfer: in a CCD image sensor, every pixel's charge is transferred through a limited number of output nodes to be converted into a voltage, while in a CMOS image sensor each pixel has its own charge-to-voltage conversion. Likewise, a CCD image sensor buffers its pixels and sends them off-chip as an analogue signal, while a CMOS image sensor amplifies each pixel and outputs a digital signal from the chip. The output of a CCD image sensor is also more uniform than the output of a CMOS image sensor. There is also a clear contrast in usage: CMOS image sensors are used in cell-phone cameras, while CCD image sensors have more often been used in professional, scientific, and industrial cameras.
A frame grabber, generally, plays the role of a device that acquires (grabs) images and converts them from analogue to digital. Present-day frame grabbers offer many additional features, such as on-board storage and multiple camera links. An example of a present-day frame grabber is the GRABLINK Full. This frame grabber works with Camera Link camera applications and supports one Base-Configuration, Medium-Configuration, or Full-Configuration camera. One of its additional features is support for a 4-lane PCI Express bus, which enables high-end image acquisition with high-speed, high-resolution area-scan and line-scan applications for printing, 3D inspection, and manufacturing inspection, making it suitable for fast production lines. It has 10 digital I/O lines compatible with a wide range of encoders and sensors, and an internal on-board memory of 128 MB.
5. What is convolution? With illustrations, explain the Sobel and Canny edge detectors

Convolution is a neighbourhood operation in which each output pixel is a weighted sum of the neighbouring input pixels.
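That weighted sum can be written out directly. Below is a minimal sketch of "valid" 2D convolution in NumPy (output only where the kernel fully overlaps the image); the kernel is flipped first, per the strict definition of convolution as opposed to correlation.

```python
import numpy as np

def convolve2d(image, kernel):
    """'Valid' 2D convolution: each output pixel is the weighted sum of the
    input pixels under the (flipped) kernel."""
    k = np.flipud(np.fliplr(kernel))  # flip -> convolution, not correlation
    kh, kw = k.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
box = np.ones((3, 3)) / 9.0  # averaging kernel: convolving with it = mean filter
print(convolve2d(image, box))
```

Convolving with the box kernel reproduces the mean filter from question 3, which is exactly why the mean filter is classed as a linear filter.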
The Sobel edge detector is a first-order-derivative edge detector. It detects edges at small scales. The operator is sensitive to high-frequency noise in the image, so it generates local edge data instead of recovering the global structure of a boundary. Unfortunately, it only detects edges at the spatial scale that fits its 3x3 window.

The Sobel operator uses the standard formula

    M = sqrt(sx^2 + sy^2)

computed over the 3x3 neighbourhood of pixel [i, j], labelled

    a0   a1   a2
    a7  [i,j] a3
    a6   a5   a4

with

    sx = (a2 + c*a3 + a4) - (a0 + c*a7 + a6)
    sy = (a0 + c*a1 + a2) - (a6 + c*a5 + a4)

where c is a constant (c = 2 for the Sobel operator).
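With c = 2, the sx and sy formulas above become the familiar 3x3 Sobel kernels. The sketch below (an illustration, not an OpenCV call) applies both kernels and combines them into the gradient magnitude M; a synthetic vertical step edge shows where the response appears.

```python
import numpy as np

# The sx / sy formulas with c = 2, written as kernels over the a0..a7 layout
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)   # sx: right column minus left column
KY = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]], dtype=float) # sy: top row minus bottom row

def sobel_magnitude(image):
    """Gradient magnitude M = sqrt(sx^2 + sy^2) at each interior pixel."""
    h, w = image.shape
    mag = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            window = image[i:i + 3, j:j + 3]
            sx = np.sum(window * KX)
            sy = np.sum(window * KY)
            mag[i, j] = np.hypot(sx, sy)  # sqrt(sx^2 + sy^2)
    return mag

# A vertical step edge: dark left half, bright right half
step = np.zeros((5, 6))
step[:, 3:] = 10.0
print(sobel_magnitude(step))  # response is nonzero only around the step
```

The response is confined to the two columns straddling the step, illustrating the claim above that Sobel only captures structure at the scale of its 3x3 window.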

The Canny edge detector, on the other hand, is a gradient-based edge detector designed to be optimal for step edges corrupted by white noise. The method is built around three criteria. The first is the detection criterion: important edges should not be missed. The second is the localization criterion: the distance between the actual and the located position of an edge should be minimal. The third is the one-response criterion: multiple responses to a single edge should be minimized.
