

Hidden-line, surface, and solid removal algorithms; shading; coloring. Introduction to parametric and variational geometry based software and their principles; creation of prismatic and lofted parts using these packages.
 Many computer graphics applications involve the display of 3D objects.

 CAD systems allow their users to manipulate models of machined

components, automobile bodies and aircraft parts.

 In applications like simulation, a high degree of realism of display

of objects may be essential to the program’s success.

 Producing a realistic image of a 3D scene on a 2D display presents

many problems.
 How is depth, the third dimension, to be displayed on the 2D screen?
 How are parts of objects that are hidden by other objects to be
identified and removed from the image?
 How can lighting, colour, and shadows contribute to the realism of the image?
Techniques for Achieving Realism

 It is impractical to produce an image that is a perfectly realistic

representation of an actual scene.

 Instead, we need techniques that take into account:

 The different kinds of realism needed by different applications.


 The amount of processing required to generate the image.

 The capabilities of the display hardware.

 The amount of detail recorded in the model of the scene.

Depth Cueing
 The basic problem addressed by visualization techniques is
sometimes called depth cueing.

 When a 3D scene is projected onto a 2D display screen,

information about the depth of objects in the images tends to be
reduced or lost entirely.

 Techniques that provide depth cues are designed to restore or

enhance the communication of depth to the observer.
Modeling Information and Realism

 Adding details to a model increases the realism of the image.

 Excessive detail may become confusing.


Hidden Line Removal

 Removal of the hidden parts that wouldn't be seen if the objects were constructed from opaque material in real life.

 Dependent on viewing direction.

 Determination of hidden edges and surfaces is one of the most challenging tasks in computer graphics.

 Requires a lot of computer time and memory space.

Hidden Line Elimination
 The lines that are hidden from view are removed from the image.

Techniques used:

1. MINIMAX TEST (Overlap or bounding box test)

 This test checks if two polygons overlap.

 It surrounds each polygon with a box by finding its extents (minimum

and maximum x and y coordinates).

 Then checks for the intersection of any two boxes in both the X and Y directions.

Minimax test for typical polygon and edges
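The minimax test can be sketched in a few lines of Python (the function names here are illustrative, not from the text):

```python
def bbox(poly):
    """Axis-aligned bounding box (minimax extents) of a 2D polygon."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return min(xs), min(ys), max(xs), max(ys)

def minimax_overlap(poly_a, poly_b):
    """Minimax (bounding-box) test: True if the boxes intersect in both
    X and Y. A False result proves the polygons cannot intersect; a True
    result only means they *might*, so further tests are needed."""
    ax0, ay0, ax1, ay1 = bbox(poly_a)
    bx0, by0, bx1, by1 = bbox(poly_b)
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1
```

Note that the test is conservative by design: it is a cheap filter run before any exact polygon intersection work.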

Containment Test (Surroundedness)

 The containment test checks whether a given point lies inside a given polygon.

 A ray drawn from the point is intersected with the polygon edges.

 If the intersection count is even, the point is outside the polygon.

 If it is odd, the point is inside.
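The even-odd counting above can be sketched as follows (a minimal Python illustration; the names are my own):

```python
def point_in_polygon(pt, poly):
    """Containment test: cast a horizontal ray from pt toward +x and
    count crossings with the polygon edges; odd count means inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):          # edge straddles the ray's height
            # x coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:               # crossing lies to the right of pt
                inside = not inside       # flip parity on each crossing
    return inside
```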

Computing Silhouettes
 A set of edges that separates visible faces from invisible faces
of an object with respect to a given viewing direction is called
silhouette edges or silhouettes.

 An edge that is part of the silhouette is characterized as the intersection of one visible face and one invisible face.

 An edge that is intersection of two visible faces is visible, but

does not contribute to the silhouette.

 The intersection of two invisible faces produces an invisible edge.
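The three cases above can be sketched in Python, assuming each face carries an outward normal and a face counts as visible when its normal makes an obtuse angle with the viewing direction (all names illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def visible(normal, view_dir):
    """A face is visible when its outward normal points against the
    viewing direction (negative dot product)."""
    return dot(normal, view_dir) < 0

def silhouette_edges(faces, view_dir):
    """faces: list of (vertex_indices, outward_normal).
    Returns edges shared by exactly one visible and one invisible face;
    edges between two visible or two invisible faces are excluded."""
    edge_vis = {}
    for verts, normal in faces:
        for a, b in zip(verts, verts[1:] + verts[:1]):
            edge_vis.setdefault(frozenset((a, b)), []).append(
                visible(normal, view_dir))
    return [tuple(sorted(e)) for e, vis in edge_vis.items()
            if len(vis) == 2 and vis[0] != vis[1]]
```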


Hidden Line Removal Algorithm- Three Approaches

1. Edge oriented approach.

 Test of all edges against all surfaces.
 It is inefficient because it tests all the edges, whether they
intersect or not.

2. Silhouette approach
 Testing all the edges against all the silhouette edges only.

3. Area oriented approach.

 It calculates the silhouette edges and connects them to form closed silhouette polygons.
Priority Algorithm - Hidden Line Removal
 Priority algorithm is also called the depth or z-algorithm.
 ASSIGNMENT OF PRIORITIES: sorting the faces (polygons) according to the largest z coordinate value of each.
 If a face intersects more than one face, other visibility tests are used.
1. Utilize the proper orthographic projection to obtain the desired view (whose hidden lines are to be removed) of the scene. To perform the depth test, the plane equation ax + by + cz + d = 0 of any face (polygon) in the image can be obtained from its vertices.
2. A face list is stored to assign priorities. For the given figure, six faces F1-F6 form such a list.

3. Assign priorities to the faces in the face list. Priority assignment is determined by comparing two faces at any one time. The priority list changes continuously, and the final list is obtained after a few iterations.
• The first face in the face list is assigned the highest priority 1.

• F1 is intersected with the other faces in the list, that is, F2-F6.

• The intersection between F1 and F4 is an area A.

• The intersection between F1 and F2 is an edge.

• No intersection or empty set between faces F1 and F6.

• F1 intersects F2 and F3 in edges. Therefore both faces are assigned priority 1.
• F1 and F4 intersect in an area. The depth of F4 is less than that of F1. F4 is assigned priority 2.
• No intersection between F1 and F5; no priority assignment is possible for F5.
• Face F1 is moved to the end of the face list, and the sorting process to determine priority is started all over again.
• In iteration 4, faces F4 to F6 are assigned priority 1 first. When F4 is intersected with F1, F1 has the higher priority. Thus, F1 is assigned priority 1 and the priority of F4 to F6 is dropped to 2.
• Reorder the face and priority lists so that the highest priority is on top of the list. In this case, the face and priority lists are [F1,F2,F3,F4,F5,F6] and [1,1,1,2,2,2] respectively.
Area Oriented Algorithm
• Identify silhouette polygons.
• Assign quantitative hiding (QH) values to edges of silhouette polygons.
• Determine the visible silhouette segments.
• Intersect the visible silhouette polygons with partially visible faces.
• Display the interior of the visible or partially visible polygons.

Hidden surface removal
• Drawing polygonal faces on screen
consumes CPU cycles
– Illumination
• We cannot see every surface in the scene
– We don’t want to waste time rendering
primitives which don’t contribute to
the final image.
Visibility (hidden surface removal)
• A correct rendering requires correct visibility
• Correct visibility
– when multiple opaque polygons cover the same
screen space, only the closest one is visible
(remove the other hidden surfaces)

(Figure: wrong visibility vs. correct visibility)

Visibility of primitives
• A scene primitive can be invisible for 3 reasons:
– Primitive lies outside field of view
– Primitive is back-facing
– Primitive is occluded by one or more objects nearer
the viewer
Visible-Surface Detection

Two main types of algorithms:

Object space: Determine which parts of the objects
are visible
Image space: Determine per pixel which point of
an object is visible


Visible surface algorithms.

• Object space techniques: applied before vertices are mapped to pixels

– Back-face culling, Painter's algorithm, BSP trees

• Image space techniques: applied while the primitives are rasterized

Hidden Surface Removal
1. Z- buffer algorithm
2. Watkin’s algorithm
3. Warnock's algorithm
4. Painter’s algorithm

Z- buffer algorithm
 This is also known as the depth buffer algorithm.
 z values can be stored for each pixel in the frame buffer.
 For each polygon in the scene, find all the pixels (x,y) that lie
inside or on the boundaries of the polygon when projected onto
the screen.
 For each of these pixels, calculate the depth z of the polygon at (x,y).
 If z > depth (x,y), the polygon is closer to the viewing eye than others already stored at the pixel.
Z-Buffer Algorithm

• As we render each polygon, compare the depth of

each pixel to depth in z buffer
• If the new fragment is closer, place its shade in the color buffer and update the z-buffer
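The loop can be sketched over pre-rasterized fragments; this follows the text's convention that larger z means closer to the viewer, so the depth buffer is initialized to negative infinity (a minimal illustration, names my own):

```python
def z_buffer_render(width, height, polygons):
    """Minimal z-buffer sketch. polygons is a list of (color, fragments),
    where fragments is a list of (x, y, z) pixels already produced by
    rasterization; larger z means closer to the viewer."""
    depth = [[float('-inf')] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]
    for color, fragments in polygons:
        for x, y, z in fragments:
            if z > depth[y][x]:      # closer than what is stored so far
                depth[y][x] = z      # update the depth buffer
                frame[y][x] = color  # place the shade in the color buffer
    return frame
```

Because each pixel is resolved independently, polygons can be submitted in any order, which is why no object-object intersection calculations are needed.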
Why is z-buffering so popular ?
• Simple to implement in hardware.
– Memory for z-buffer is now not expensive
• Diversity of primitives – not just polygons.
• Unlimited scene complexity
• Don’t need to calculate object-object intersections.
Drawbacks:
• Extra memory and bandwidth
• Waste time drawing hidden objects
• Z-precision errors
• May have to use point sampling
Painter’s Algorithm

• Render polygons in back to front order so that

polygons behind others are simply painted over

(Figure: B behind A as seen by the viewer; fill B, then A.)

Depth-Sorting Algorithm
A polygon S can be drawn if all remaining polygons S’
satisfy one of the following tests:

1. No overlap of bounding rectangles of S and S’

2. S is completely behind plane of S’
3. S’ is completely in front of plane of S
4. Projections S and S’ do not overlap
Depth-Sorting Algorithm

1. No overlap of bounding rectangles of S and S’


Depth-Sorting Algorithm

2. S is completely behind plane of S’

Substitute all vertices of S into the plane equation of S’, and
test if the result is always negative.
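Test 2 can be sketched as follows, assuming the plane normal of S’ is oriented toward the viewer so that "always negative" means behind (illustrative names, not from the text):

```python
def plane_from_polygon(poly):
    """Plane coefficients (a, b, c, d) with ax + by + cz + d = 0, built
    from the first three (assumed non-collinear) vertices."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = poly[:3]
    ux, uy, uz = x2 - x1, y2 - y1, z2 - z1
    vx, vy, vz = x3 - x1, y3 - y1, z3 - z1
    a = uy * vz - uz * vy            # cross product gives the normal
    b = uz * vx - ux * vz
    c = ux * vy - uy * vx
    d = -(a * x1 + b * y1 + c * z1)
    return a, b, c, d

def completely_behind(s, s_prime):
    """Substitute all vertices of S into the plane equation of S' and
    check that the result is always negative (S completely behind S')."""
    a, b, c, d = plane_from_polygon(s_prime)
    return all(a * x + b * y + c * z + d < 0 for (x, y, z) in s)
```

Test 3 is the mirror image: substitute the vertices of S’ into the plane of S and require an always-positive result.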


Depth-Sorting Algorithm

3. S’ is completely in front of plane of S

Substitute all vertices of S’ into the plane equation of S, and
test if the result is always positive.


Depth-Sorting Algorithm

4. Projections S and S’ do not overlap


Depth-Sorting Algorithm

If all tests fail: swap S and S’, and restart with S’.



Hidden Solid Removal
Hidden solid removal is the display of solid models with hidden lines
or surfaces removed.

Ray –Tracing Algorithm

The ray-tracing algorithm for solid models consists of three main steps:
 Ray / Primitive intersection
 Ray / Primitive classification
 Ray / Solid classification
Ray/Primitive intersection
A ray enters and exits the solid via the faces and surfaces of the primitives.

Possible outcomes are:

 No intersection-ray misses the primitives.
 Ray is tangent to the primitives –touches at one point.
 Ray intersects the primitive at two different points.
Ray / Primitive classification

 From the ray/primitive intersection points, the “in”, “out” and “on” segments of the ray can be found.
 If the ray intersects the primitive at two different points, it is divided into three segments: out - in - out.
 If the ray lies on a face of the primitive, it is classified as
out – on – out.
Ray / Solid classification
 Ray / Solid classification produces the “in” and “out” segments of
the ray with respect to the solid.
The combine operation is a three step process:

 First, the ray/primitive intersection points from left and right subtrees
are merged , forming a segmented ray.

 Second, the segments of the ray are classified according to the boolean operator and the classifications from the two subtrees.

 Third, the ray is simplified by merging adjacent segments with the same classification.

 Line drawings are limited in their ability to represent complex shapes.
Hence we adopt shaded images.

 Shaded color images convey shape information that cannot be

represented in line drawings.

 Shaded images can also convey features other than shape such as
surface finish or material type (plastic or metallic look).

 In shading a scene, a pinhole camera model is used.

 Shading process must take into account the position and color of the
light sources and the position, orientation and surface properties of
the visible objects.
 Shading models simulate the way
visible surfaces of objects reflect light.
 They determine the shade of a point

of an object in terms of light sources,
surface characteristics, and the
positions and orientations of the
surfaces and sources.
Light source in shading

 Two types of light sources can be identified:

 Point light source and Ambient light.
 Objects illuminated with point light sources appear harsh, because
they are illuminated from one direction only.
 Ambient light is a light of uniform brightness and is caused by the
multiple reflections of light from many surfaces.
 The input to a shading model is intensity and color of light source,
surface characteristics at the point to be shaded, and the positions
and orientations of surfaces and sources.
 The output from a shading model is an intensity value at the point to be shaded.
 Shading models are applicable to points only. To shade an object, a
shading model is applied many times to many points on the object.
Interaction of light with matter
Consider point light sources shining on surface of objects. (Ambient
light adds a constant intensity value to the shade at every point.)
The light reflected off a surface can be divided into two parts- diffuse
& specular.
When light hits an ideal diffuse surface, it is re-radiated equally in all
directions, so that the surface appears to have the same brightness from
all viewing angles.
Ideal specular surfaces re-radiate light in only one direction, the
reflected light direction.
Physically, the difference between these two components is that diffuse
light penetrates the surface of an object and is scattered internally
before emerging again while specular light bounces off the surface.
Real objects contain both diffuse and specular components, and
both must be modeled to create realistic images.
Specular Reflection

 Specular reflection is a characteristic of shiny surfaces.

 How a shiny surface appears depends on the directions of the light source and the viewing eye.
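A point-shading sketch combining the ambient, diffuse, and specular terms discussed above; the coefficients and exponent here are illustrative choices, not values from the text:

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_point(normal, to_light, to_eye,
                ka=0.1, kd=0.6, ks=0.3, shininess=16):
    """Intensity at one point: constant ambient term + diffuse term
    (equal re-radiation, depends only on the light direction) + specular
    term (concentrated around the mirror-reflection direction)."""
    n, l, v = normalize(normal), normalize(to_light), normalize(to_eye)
    diffuse = max(dot(n, l), 0.0)
    # ideal specular direction: l mirror-reflected about n
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess
    return ka + kd * diffuse + ks * specular
```

The shininess exponent controls how sharply the highlight falls off: large values give the small, bright highlight of a polished surface.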
Shading Surfaces

 Once we know how to shade a point, we can consider how to shade a surface.
 Relevant points on the surface have the same location in screen
coordinates as the pixels of the raster display.
 The important shading algorithms are –
 Constant shading,
 Gouraud shading or first derivative shading and
 Phong or second derivative shading.
Constant Shading:

This is the simplest and least realistic shading algorithm. An entire

polygon has a single intensity. Constant shading makes the polygonal
representation obvious and produces unsmooth shaded images.

This type of shading is also called flat shading or faceted shading.

 A single surface normal is determined for each polygon and used for illumination.

Assumptions are:
 The object is modeled using plane polygonal surfaces only.

 The light sources are only at infinity.

 The view point is also at infinity.

Gouraud Shading:

It is a popular form of intensity interpolation or first-derivative shading. It
was proposed as a technique to eliminate intensity discontinuities caused by
constant shading. The steps involved are:
1. Calculation of surface normals.
2. Calculation of vertex normals.
3. Computation of vertex intensities using the vertex normals.
4. Computation of the shade of each polygon by linear interpolation of vertex intensities.
Gouraud shading takes longer than constant shading and requires more planes
of memory to get the smooth shading for each color.
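Step 4, the linear interpolation across a scanline, can be sketched as follows (illustrative names):

```python
def lerp(a, b, t):
    """Linear interpolation between intensities a and b."""
    return a + (b - a) * t

def gouraud_scanline(i_left, i_right, x_left, x_right):
    """Interpolate the edge intensities across one scanline span
    (assumes x_right > x_left)."""
    span = x_right - x_left
    return [lerp(i_left, i_right, (x - x_left) / span)
            for x in range(x_left, x_right + 1)]
```

The same interpolation is first applied along the polygon edges to obtain i_left and i_right for each scanline.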

(Figures: surface normals to the polygonal faces; vertex normals; intensity interpolation along polygon edges; 12-bit color output for shading.)
Phong Shading:
Phong shading overcomes all the problems
of Gouraud shading, although it requires
more computational time.

The basic idea is to interpolate
normal vectors at the vertices instead of the
shade intensities and to apply the shading
model at each point (pixel).

To perform the interpolation, an average
normal vector is first obtained at each vertex.


The use of colors in CAD/CAM has two main objectives:

• Facilitate creating geometry

• Display images

Colors can be used in geometric construction. In this case various

wireframe, surface, or solid entities can be assigned different colors to
distinguish them.

Color is one of the two main ingredients (the second one being texture) of
shaded images produced by shading algorithms.

In some engineering applications such as finite element analysis, colors

can be used effectively to display contour images such as stress or heat-
flux contours.
Color Properties
Color descriptions and specifications generally include three
properties: Hue, Saturation and Brightness.

Hue – Associates a color with some position in the color spectrum .

Red, Green and Yellow are Hue names.

The frequency that is dominant for any picture or surface is

called the dominant frequency or wavelength or hue of the light.

Saturation – Describes the clarity or purity of a color or it describes

how diluted the color is by white light.

Pure spectral colors are fully saturated, and grays are desaturated colors.

Brightness – It is related to the intensity value, or lightness, of the color.
Color Models
 A color model or color space is a 3D color coordinate system to allow specification of colors within some color range.

 Each displayable color is represented by a point in the color model.

 These models are based on the red, green, and blue (RGB) primaries.

 For all the models, coordinates are translated into three voltage values in
order to control the display.

Some of the popular color models are:

• RGB model
• CMY model
• YIQ model
• HSV model
• HSL model
RGB Color Model

 The RGB color space uses a Cartesian coordinate system.

 Any point (color) in the space is obtained from the three
RGB primaries; that is, the space is additive.
 The main diagonal of the cube is the locus of equal amounts
of each primary and therefore represents the gray scale.
color) produces the black color and the maximum
intensity (1 for each color) produces the white color.
RGB Color Model
CMY Model
The CMY (cyan, magenta, yellow) model is the
complement of the RGB model.

 The cyan, magenta, and yellow colors are the complements of the red, green, and blue colors, respectively.

 The white color is at the origin (0,0,0) of the model and the black color is at the point (1,1,1), which is the opposite of the RGB model:

[C M Y] = [1 1 1] - [R G B]

 The CMY is considered a subtractive model because the model primary colors
subtract some color from white light.

 For example, a red color is obtained by subtracting a cyan color from white light
(instead of adding magenta and yellow).

 The unit column vector represents white in the RGB model or black in the
CMY model.
CMY Model

CMY model is used in printing technology.
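With RGB and CMY components in [0, 1], the complement relation above is one subtraction each way (a minimal sketch):

```python
def rgb_to_cmy(r, g, b):
    """Subtractive complement: [C, M, Y] = [1, 1, 1] - [R, G, B]."""
    return 1 - r, 1 - g, 1 - b

def cmy_to_rgb(c, m, y):
    """The inverse is the same subtraction from the unit vector."""
    return 1 - c, 1 - m, 1 - y
```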

YIQ Model
The YIQ space is used in raster color graphics.

It was designed to be compatible with black and white television broadcasts.

The Y axis of the color model corresponds to the luminance (the total amount of light).

The I axis encodes chrominance information along a blue-green to orange vector, and the Q axis encodes chrominance information along a yellow-green to magenta vector.

The conversion from YIQ coordinates to RGB coordinates is defined by the following (the commonly quoted NTSC coefficients):

R = Y + 0.956 I + 0.621 Q
G = Y - 0.272 I - 0.647 Q
B = Y - 1.106 I + 1.703 Q
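Written out as code, using the commonly quoted NTSC matrix coefficients (verify against your own reference before relying on the exact values):

```python
def yiq_to_rgb(y, i, q):
    """YIQ -> RGB using commonly quoted NTSC matrix coefficients."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b
```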
HSV (hue, saturated, value) Model
This color model is user oriented because it is based on what artists use to
produce colors (hue, saturation, and value).

It is contrary to the RGB, CMY and YIQ models which are hardware oriented.
The model approximates the perceptual properties of hue, saturation, and value.

The conversion from HSV coordinates to RGB coordinates can be defined from the geometry of the hexacone.

The hue value H (range from 0⁰ to 360⁰) defines the angle of any point on or
inside the single hexacone.

Each side of the hexacone bounds 60⁰.
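A sketch of the hexacone conversion (H in degrees, S and V in [0, 1]); this follows the standard sector construction rather than anything specific to the text:

```python
def hsv_to_rgb(h, s, v):
    """HSV -> RGB: pick the 60-degree sector of the hexacone that H
    falls in, then mix V with two intermediate values q and t."""
    h = (h % 360) / 60.0
    i = int(h)              # sector index 0..5
    f = h - i               # fractional position within the sector
    p = v * (1 - s)
    q = v * (1 - s * f)
    t = v * (1 - s * (1 - f))
    return [(v, t, p), (q, v, p), (p, v, t),
            (p, q, v), (t, p, v), (v, p, q)][i]
```

Note that when S = 0 the result is (v, v, v) regardless of hue, which is the gray-scale axis of the hexacone.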

HSV Model
HSL Model
The HSL (hue, saturation, lightness)
color model forms a double hexacone

Full saturation here occurs at L = 0.5
and not at 1.0 as in the HSV model.

The HSL model is as easy to use as

the HSV model.

 The conversion from HSL to RGB is

possible by using the geometry of the
double hexacone as in the HSV model.