
Shading

Shading is a method or technique in rendering (the process of forming an image from a geometric model so as to produce a more realistic picture). Shading is the process of determining the color of every pixel covering a surface using an illumination (lighting) model. The method involves:

· Determining the visible surface at each pixel

· Computing the normal of the surface

· Evaluating the light intensity and color using an illumination model.

One way to display a 3D object so that it looks real is to use shading. Shading displays a 3D object by coloring its surface while taking lighting effects into account. The lighting effects in question are ambient, diffuse, and specular. The shading methods used are Flat Shading, Gouraud Shading, and Phong Shading. In Flat Shading the color is computed only once, because there is no color gradation within a single face; in Gouraud Shading the color is computed at each vertex, so a color gradation appears; and in Phong Shading the color is computed along each scanline of the face, so the gradation looks smoother.

An object behaves differently when light falls on it. Some objects reflect, refract, or absorb light. In addition, some objects cast a shadow when lit.

Shadows arise because of a light source. Around us there are many kinds of light sources, for example: sunlight, fluorescent lamps, incandescent bulbs, and so on. Light from these various sources is often assumed and modeled as point light and ambient light. These two kinds of light make it easier to model shadows and reflections.

In the (direct lighting) illumination model there are three shading models:

1) Flat shading

Flat shading is the easiest kind to implement. Flat shading has the following characteristics:

· The same tone is given to every polygon

· The amount of light is computed from a single point on the surface

· A single normal is used for the whole surface.

Example image of Flat Shading:


2) Gouraud shading

A technique developed by Henri Gouraud in the early 1970s. It displays the light-and-dark appearance of an object's surface by taking into account the color and illumination at each corner of a triangle. Gouraud shading is a simple rendering method compared with Phong shading; it produces no shadow or reflection effects. The method is used in computer graphics to simulate the different effects of light and color across the surface of an object. In practice, Gouraud shading is used to achieve smooth lighting on low-polygon surfaces without the heavy computational cost of evaluating the lighting for every pixel.

Example image of Gouraud Shading:


3) Phong shading

Phong shading refers to a set of techniques in 3D computer graphics. It comprises a model for the reflection of light from surfaces and a compatible method for estimating pixel colors by interpolating surface normals across rasterized polygons. The reflection model may also be called the Phong reflection model, Phong illumination, or Phong lighting. It may be called Phong shading in the context of pixel shaders, or anywhere else where the lighting calculation can be referred to as "shading". The interpolation method may also be called Phong interpolation, which is usually what is meant by "per-pixel lighting". It is usually called "Phong shading" when contrasted with other interpolation methods such as Gouraud shading or flat shading. The Phong reflection model can be used together with any of these interpolation methods. The method was developed by Bui Tuong Phong at the University of Utah.
Broadly, an image of the difference between Flat shading, Gouraud shading, and Phong shading:

https://slideplayer.info/slide/3994727/

Presentation titled "RENDERING (Shading & Shadow)" (transcript):

1 RENDERING (Shading & Shadow)


Computer Graphics (Defiana Arnaldy, M.Si)

2 Shading

3 1. Introduction to Shading

Shading is a technique that combines coloring and lighting applied to polygons, points, and lines to display dark, light, smooth, or rough effects on certain parts of an object's surface. [Figure: non-shaded, no-edge-lines, and shaded versions of an object]

4 2. Types of Shading
A. Flat Shading

Flat shading is also called constant shading: a shading technique that gives a single color to each polygon and requires only one color calculation per polygon. Types of flat shading include Lambertian Shading and Uniform Shading. Applying Flat Shading in OpenGL: glShadeModel(GL_FLAT);
5 Characteristics of flat shading include:

· The same tone is given to every polygon
· The amount of light is computed from a single point on the surface
· A single normal is used for the whole surface.

6 In general, flat shading can produce accurate shading under the following conditions:

· The object is a polyhedron (many-sided), i.e. a mesh enclosing a finite, closed volume.
· All light sources are far from the object's surface, so N·L is constant across each polygon face.
· The viewing position is far enough from the surface that V·R is constant across each polygon face.

7 B. Smooth Shading

Smooth shading is a technique that displays a color gradation across each polygon. The gradation results from interpolating the colors at the polygon's vertices. Applying Smooth Shading in OpenGL: glShadeModel(GL_SMOOTH); The technique is divided into several types, including:

a) Gouraud Shading

Gouraud Shading was published by Henri Gouraud in 1971, with the aim of giving a smooth shading effect across a polygon's vertices without requiring heavy computation.

8 Gouraud Shading displays the light-and-dark appearance of an object's surface by taking into account the color and illumination at each corner of a triangle. This technique also produces no shadow or reflection effects.

b) Phong Shading

Phong Shading was developed by Bui Tuong Phong in 1973 in his dissertation at the University of Utah. As with Gouraud Shading, Phong Shading gives a color gradation across the object's surface.
Advantages:
- gives shadow & reflection effects
- colors every scanline, so the result is smoother.
Disadvantage:
- slow to compute

9 Shadow

10 Introduction to Shadows

A shadow is the projection of light falling on an object and carried on to a plane or surface behind it. A shadow is not itself an object. So, how do we create the shadow of an object?

11 Shadow formula:
- The plane is taken to be y = 0.

- If the light source is very far from the object, then:

12 References: Pratiwi, Dian; various sources

13 The End…

A demo can be seen at:

http://www.cs.toronto.edu/~jacobson/phong-demo/
https://www.scratchapixel.com/lessons/3d-basic-rendering/phong-shader-BRDF

The Phong Model, Introduction to the Concepts of Shader, Reflection Models and BRDF

Contents

The Phong Model and the concepts of Illumination Models and BRDF

Source Code

Keywords: specular reflection, glossy, roughness, Phong reflection model, BRDF, reflection model.

We do not expect to be able to display the object exactly as it would appear in reality, with texture,
overcast shadows, etc. We hope only to display an image that approximates the real object closely
enough to provide a certain degree of realism. Bui Tuong Phong

Please read the following three chapters for a quick introduction to shading and rendering: A Light
Simulator, Light Transport and Shading.

The Phong Model

Before we dive into the concepts of BRDF and illumination models, we will introduce a technique used to simulate the appearance of a glossy surface such as a plastic ball. From there, it will become easier to generalise the technique, which is what the concepts of BRDF and illumination or reflection models are all about.

Figure 1: the specular highlights are just a reflection of the strongest sources of light in the scene
surrounding the ball. The ball is both diffuse and specular (shiny).

In the previous lesson, we learned about simulating the appearance of mirror-like and diffuse surfaces. But what about glossy surfaces? First you should note that the plastic ball example we have just mentioned is more than just a purely glossy surface. Like most materials, it can be described as having both a diffuse component and a glossy component (the shiny or specular reflections that you can see in the ball from figure 1). The reason why many materials exhibit this dual property is not always the same. In some cases, it is simply because the material is itself a composite of different materials. For example, a plastic ball can be made of some sort of flakes or small particles acting as diffusers, glued together by a polymer that itself acts as a reflective (and often transmissive) material. Thus the flakes or small particles diffuse light while the polymer reflects it. In other cases, objects are made of several materials layered on top of each other. This is the case for the skin of many fruits. An orange, for example, has a thick skin layer that acts more like a diffuse surface, itself covered with a thin oily layer that acts more like a specular or reflective surface. In summary, we can describe the appearance of many materials as having a diffuse component and a specular or glossy component. We could put this concept in equation form:

SP=diffuse()∗Kd+specular()∗Ks.

Where the term SP stands for the "shading at P". In this equation, the term diffuse() is nothing less than the diffuse effect we learned to simulate in the previous lesson. The term specular() is new and will be used to simulate the glossy appearance of the object. The strength of both effects, or, to say it differently, the balance of one against the other, can be controlled through the two parameters Kd and Ks. In shading and the world of computer graphics these terms are given many names and have caused a lot of ink to be spilled. You can look at them as the strength or gain of the diffuse and specular components. By tweaking them, one can create a wide variety of effects. But how these two parameters should be set is something we will look into later on. For now, let's focus on the specular function itself.
Figure 2: the waves of this water surface break the reflection of the background scene.

Bui Tuong Phong was a promising researcher in the field of computer graphics who sadly passed away in 1975, soon after he published his thesis in 1973. One of the ideas he developed in his thesis was that indeed many materials could be computed as a sum of weighted diffuse and specular components. Readers interested in learning what causes some surfaces to be glossy are invited to read the first chapter of the previous lesson. Glossy highlights are just the reflection of light sources by the object. The phenomenon is similar to that of a perfect mirror-like surface reflecting an image of the light sources or of the objects in the scene, though rather than being perfectly smooth as in the case of a mirror, the surface of a glossy material is slightly broken up (at the microscopic scale), which causes light to be reflected in directions slightly different from the mirror direction (the waves of a water surface produce a similar effect, as shown in figure 2). This has the effect of blurring the light reflected off the surface of the object. Because a rough surface acts like a broken mirror, computer graphics researchers like to define it as a collection of small mirrors, which they also call micro-facets.

The concept of micro-facets is purely a view of the mind and doesn't, obviously, "reflect" how the surface of a rough material actually looks. Though representing rough surfaces as a collection of micro-facets simplifies the resolution of mathematical equations, which can then be used to simulate the appearance of rough surfaces. This is the foundation of the micro-facet shading models. You can find a lesson devoted to this topic in the next section.

Figure 3: the rougher the surface, the larger and the dimmer the specular reflection. You can see this
as the light being "blurred" across the surface of the object.

Two observations are worth making at this point:


· The glossy reflection of a light source is dimmer than the reflection of that same light source by a mirror-like surface (we assume here that the viewer looks at the reflection of a light source along the light ray's reflection direction, as shown in figure 3). The reason is that only a fraction of the micro-facets (as we call them in CG), or small mirrors making up the surface of our glossy object, reflect light in the viewing direction, while in the case of a mirror-like surface, all light rays are reflected in that direction (figure 3). In the case of a rough surface, only a fraction of the rays are reflected towards the eye when the observer looks down along the surface's ideal reflection direction, hence the decrease in brightness.

· The brightness of a glossy reflection decreases as the angle between the view direction and the ideal reflection direction increases. This is due to the fact that as this angle increases, the number of micro-facets reflecting light towards the eye decreases, hence the decrease in brightness. Basically, the probability of finding a micro-facet reflecting light in the direction of the observer decreases as we walk along the surface of the object away from the point where the reflection of the light would be observed if the surface were a perfect mirror. This is a statistical property of the way the micro-facets are distributed.

What's important in this last observation is basically that the brightness of the specular reflection decreases as the distance increases between a point on the surface of the object and the point where the reflection of the light source would form if the surface were a perfect mirror. This idea is illustrated in the following series of images. On the left you can see the reflection of a small light bulb in a perfect mirror. As we progress to the right, the surface roughness increases. Note how the reflection's brightness decreases and the light bulb's reflection spreads across a larger area. Note also that the highlight's intensity decreases as points on the surface get farther from the position of the original reflected light.
Figure 4: if the view direction is not perfectly aligned with the mirror direction, we don't see the reflection of the light source. But we can take the dot product between the view direction and the reflection direction, which gives us an indication of how far the two vectors deviate from each other. This is the principle upon which specular reflections are simulated in the Phong model.

Phong observed that it was possible to simulate this effect by computing the ideal reflection direction of a light ray incident on the shaded point, and taking the dot product between this reflected ray and the actual view direction. As we know from the previous lesson, to see the reflection of a point on a mirror-like surface, the view direction or line of sight needs to coincide perfectly with the reflection direction. If these directions differ (even by a small amount), the observer won't see the reflection of that point at all. When the two vectors are the same (when the view direction is parallel to the reflection direction), their dot product is equal to 1. As the angle between the view direction and the reflection direction increases, the dot product between the two vectors decreases (and eventually reaches 0).

Specular ≈ V⋅R.

Where V is the view direction and R is equal to:

R = 2(N⋅L)N − L.

Figure 5: the shape of (V⋅R)^n for different values of n.

L is the incident light direction at P, the shaded point. You can see a plot of this equation in figure 5 (the red curve). Though Phong noticed that the curve is quite broad, which in itself would create a pretty large specular highlight. To solve this problem and shape the specular highlight, he raised the equation to the power of some value n (which is often called the specular exponent):

Specular ≈ (V⋅R)^n.

Figure 5 shows the shape that this equation has for different values of n. The higher the value, the narrower the curve, resulting in a smaller, tighter specular highlight. If you apply this model and render a series of spheres with increasing values of n, here is what you get:

As you can see, some of these spheres start to look like shiny grey spheres. Though there is a problem. Since the probability that a micro-facet reflects light toward the viewer decreases as the roughness of the object increases, the overall brightness of the specular highlight should also decrease with n. In other words, the larger the highlight, the dimmer it should be. Though this is clearly not the case in this render. Unfortunately, Phong's model is empirical; as he noted himself in his thesis, the numbers n and Ks have no physical meaning. In order to adjust the specular highlight intensity, you need to tweak the parameter Ks until you get the desired look.


Here is the code used to compute the images above:

Vec3f castRay(...)
{
    ...
    if (trace(orig, dir, objects, isect)) {
        ...
        switch (isect.hitObject->type) {
            case kPhong:
            {
                Vec3f diffuse = 0, specular = 0;
                for (uint32_t i = 0; i < lights.size(); ++i) {
                    Vec3f lightDir, lightIntensity;
                    IsectInfo isectShad;
                    lights[i]->illuminate(hitPoint, lightDir, lightIntensity, isectShad.tNear);
                    bool vis = !trace(hitPoint + hitNormal * options.bias, -lightDir, objects, isectShad, kShadowRay);

                    // compute the diffuse component
                    diffuse += vis * isect.hitObject->albedo * lightIntensity *
                               std::max(0.f, hitNormal.dotProduct(-lightDir));

                    // compute the specular component:
                    // what would be the ideal reflection direction for this light ray?
                    Vec3f R = reflect(lightDir, hitNormal);
                    specular += vis * lightIntensity *
                                std::pow(std::max(0.f, R.dotProduct(-dir)), isect.hitObject->n);
                }
                hitColor = diffuse * isect.hitObject->Kd + specular * isect.hitObject->Ks;
                break;
            }
            default:
                break;
        }
    }
    else {
        ...
    }

    return hitColor;
}

Shading/Reflection Models

The model that Phong used to simulate the appearance of shiny materials is what we call in CG a reflection or shading model. The reason why materials look the way they do is often the result of very complex interactions between light and the microscopic structure of the material objects are made of. It would be too complicated to simulate these interactions, therefore we use mathematical models to approximate them instead. The Phong model, which is very popular because of its simplicity, is only one example of such a reflection model; a wide variety of other mathematical models exist. To name just a few: the Blinn-Phong, the Lafortune, the Torrance-Sparrow, the Cook-Torrance, the Ward anisotropy, and the Oren-Nayar model.

The Concept of BRDF and The Rise and Fall of the Phong Model

Figure 6: incoming light direction I and outgoing view direction V.

As mentioned above, what Phong essentially used to simulate the appearance of shiny materials is a function which combines a diffuse and a specular component. This function contains a certain number of parameters, such as n, that can be tweaked to change the appearance of the material, but more importantly, it depends on two variables: the incident light direction (which is used to compute both the diffuse and specular components) and the view direction (which is used to compute the specular component only). We could essentially write this function as:

fR(ωo, ωi).

Where ωo and ωi are the angles between the surface normal (N) and the view direction (V), and between the surface normal and the light direction (I), respectively (figure 6). The subscript o stands for outgoing. In computer graphics, this function is given the fancy name of Bidirectional Reflectance Distribution Function, or BRDF for short. A BRDF is nothing else than a function that returns the amount of light reflected in the view direction for a given incident light direction:

BRDF(ωo, ωi).

Any of the shading models that we mentioned above, such as the Cook-Torrance or the Oren-Nayar model, are examples of BRDFs. You can also see a BRDF as a function that describes how a given object scatters or reflects light, if you prefer. As suggested, this amount of reflected light depends on both the incident light direction and the view direction. Many BRDFs have been proposed over the years: some are designed to simulate a specific type of material. For example, the Oren-Nayar model is ideally suited to simulate the appearance of the moon, which does not reflect light exactly like a diffuse surface would. Some of these models were designed either on principles of optics or to fit physical measurements. Some other models, such as the Phong reflection model, are more empirical. A lesson is devoted to the concept of BRDF alone, so don't worry if we just scratch the surface of the topic for now.

There are good and bad BRDFs. A bad BRDF is essentially one that breaks one or more of the three following rules:

· First, a BRDF is a positive function everywhere over the range of valid incoming and outgoing directions.

· Second, a BRDF is reciprocal. In other words, BRDF(ωo, ωi) = BRDF(ωi, ωo): if you swap the incoming and outgoing directions in the function, the function returns the same result.

· Finally, a BRDF is energy conserving. What this essentially means is that a BRDF cannot create more light than it receives (unless the surface is itself emissive, but this is a special case). Overall, an object cannot reflect more light than the amount of light incident on its surface, and a BRDF should naturally follow the same rule.

A good BRDF is a BRDF that complies with these three rules. The problem with the Phong model is that it is essentially not energy conserving. It would take too long to demonstrate this in this lesson, though if you read the lesson on BRDF [link], you will learn why. In general, a few factors contribute to making a BRDF useful and good: it needs to be physically plausible, it needs to be accurate, and it needs to be computationally efficient (speed and memory consumption should both be considered here). The Phong model is not physically correct but is computationally efficient and compact, which is why it was very popular for many years. While the Phong model is still considered useful for teaching people the basics of shading, it is not really used anymore today and has been replaced by more recent and more physically correct models.
The Phong model is also very useful to learn about more advanced rendering techniques. Its
mathematical simplicity makes it an ideal candidate to learn and teach about importance sampling
for instance. Check the lesson on importance sampling in the next section if you want to learn more
about this topic.

Note also that when used in conjunction with delta lights, as opposed to area lights, the result of the specular reflection looks nice but isn't physically accurate (besides the fact that the model is not energy conserving). The specular reflection being essentially a blurred reflection of a light source, the size of that reflection depends on the size of the source and its distance to the object (the closer the light to the surface, the larger the reflection). Since delta lights have no size by definition, the size of the specular reflection can't be physically accurate. The solution to this problem is to use area lights. You can find more information on this topic in the next section.

What Do You Mean by Specular or Diffuse Lobes?


Figure 7: diffuse and specular lobes. To simulate the appearance of complex materials you often need to combine several lobes (for example, one diffuse lobe and 2 to 3 specular lobes with different specular exponents and weights).

We will finish this chapter on the concept of the specular lobe, which you may have heard or read about. When a surface is rough, it reflects light in directions slightly different from the perfect reflection direction but centred around it. You can visualise this process by drawing some sort of elongated shape centred around the reflection direction, as shown in figure 6. This represents, in a way, the set of possible directions that the surface reflects light into. This is what we call, in CG, the lobe of the specular reflection. In figure 7 we represented this lobe as a two-dimensional shape, but the shape should actually be three-dimensional. The shape of this lobe changes with the incoming light direction. This is a property that most materials have: they have a unique way of reflecting light for each possible incoming light direction. This lobe, or its shape, can be quite complex for real materials. We can acquire it using an instrument called a gonioreflectometer, which measures how much light a given material reflects in every possible direction above its surface for a given incoming light direction. The shape of the resulting three-dimensional data depends entirely on the material's properties. If the material is more diffuse, the result will look like a hemisphere. If the material is more specular, there will be a long elongated lobe oriented about the reflection direction. Data acquired from real materials is very useful either to derive mathematical models that can be used to simulate a given material or to validate the accuracy of a given BRDF model. If the BRDF model behaves like the measured data (it reflects light the same way), then the model is a good candidate for simulating the appearance of the measured material. Note that when you look at the reflectance function of measured materials, you can see that they often have more than one lobe. In CG, we simulate this effect by combining several lobes with different parameters and weights. For example, in the bottom part of figure 7 we combined a diffuse and a specular lobe. The resulting material should look both shiny and diffuse, as with the Phong model. More information on this topic can be found in the next section.

And What About Other Types of Material


Figure 8: for BRDFs, we assume that the point where a light ray strikes the surface is the same as the point where the same ray is reflected by the object. For translucent objects, these two points are different: a light ray can strike the surface at a point x, travel through the material the object is made of, and then exit the object at a point x' some distance away from x.

Contrary to what it may seem, there are only a few types of materials. As mentioned earlier, objects' appearance is sometimes complex, but it only requires combining different lobes. The process by which we find the recipe for mixing these lobes is generally complex, but the equations for creating the lobes themselves, such as the law of reflection or Snell's law, are pretty simple. As mentioned in the previous lessons, we generally divide materials into two broad categories: dielectrics and conductors. Conductors are essentially metals: they conduct electricity. In opposition, dielectrics are insulators. This includes plastic, pure water, glass, wood, rubber, wax, etc. Conductors (gold, aluminium, etc.) are essentially reflective and their appearance can easily be simulated using the law of reflection, though remember that you should use different Fresnel equations for dielectrics and conductors. The appearance of dielectrics also varies considerably: water, wood, plastic and wax all look different. Water and glass can be simulated using a combination of reflection, transmission and Fresnel. Wood can essentially be simulated using a diffuse lobe. Plastic can be simulated using a combination of diffuse and specular (the Phong model can be used to simulate plastic, as it combines both components). Simulating wax is a slightly different problem, because wax is translucent. In all the examples we have studied so far, we considered that the point where a light ray strikes the surface and the point where the same light is reflected back into the environment are one and the same. BRDFs are designed on the assumption that this is indeed the case. Though in the case of translucent materials, when a light ray strikes the surface of an object at a point x, it generally travels through the material the object is made of and eventually leaves the object at a different point x'. When the ray enters the object, it is scattered one or multiple times by the structures the material is made of. The ray generally follows some sort of random walk until it eventually leaves the object at some random position. BRDFs can't be used to simulate this sort of object. We need another kind of model called a BSSRDF. This complex acronym stands for Bidirectional Scattering-Surface Reflectance Distribution Function. The phenomenon is also known as subsurface scattering. Check the next section to learn more about BSSRDFs and simulating the appearance of translucent materials.

Should I Tint Specular Reflections with the Objects' Color?

This question is often asked by CG artists, but even the most experienced ones sometimes fail to know what the exact answer should be. The answer to this question is no if the material is a dielectric, in other words, if it is not a conductor/metal. A specular reflection is only a reflection of a light source off of the surface of an object. Therefore the color of that reflected light should be the same as the color of the light emitted by the light source. If the light source emits red light, then the specular highlight should be red. If the light source emits white light, then the specular highlight should be white. Never, ever tint a specular reflection, especially with the color of the object. If the object is yellow and the light being reflected is white, the specular reflection won't be yellow. It will be white.

There is an exception to this rule, though, with metals. Some metals have a color (bronze, gold, copper) and yet are purely reflective. Metals tint the specular reflections with their own color, if you wish. For gold, for example, it is a yellow color; for copper it is some dark orange-red color, etc. If you want to simulate the appearance of gold, you have to multiply the reflected light by the metal's color: yellow. Though if the metal is covered with a layer of paint, then what you are simulating is no longer the metal but the paint layer, which on its own is a dielectric. Thus in this case, you should not tint specular reflections.


This project contains the following files (right-click files you'd like to download):

phong.cpp, geometry.h

A simple program to demonstrate some basic shading techniques

Instructions to compile this program:

Download the phong.cpp and geometry.h files to a folder. Open a shell/terminal, and run the following command where the files are saved: c++ -o shading phong.cpp -std=c++11 -O3. Run with: ./shading. Open the file ./out.ppm in Photoshop or any program that reads PPM files.

#include <cstdio>
#include <cstdlib>
#include <memory>
#include <vector>
#include <utility>
#include <cstdint>
#include <iostream>
#include <fstream>
#include <cmath>
#include <sstream>
#include <chrono>
#include "geometry.h"

static const float kInfinity = std::numeric_limits<float>::max();


static const float kEpsilon = 1e-8;
static const Vec3f kDefaultBackgroundColor = Vec3f(0.235294, 0.67451, 0.843137);
template <> const Matrix44f Matrix44f::kIdentity = Matrix44f();

inline
float clamp(const float &lo, const float &hi, const float &v)
{ return std::max(lo, std::min(hi, v)); }

inline
float deg2rad(const float &deg)
{ return deg * M_PI / 180; }

inline
Vec3f mix(const Vec3f &a, const Vec3f& b, const float &mixValue)
{ return a * (1 - mixValue) + b * mixValue; }

struct Options
{
uint32_t width = 640;
uint32_t height = 480;
float fov = 90;
Vec3f backgroundColor = kDefaultBackgroundColor;
Matrix44f cameraToWorld;
float bias = 0.0001;
uint32_t maxDepth = 5;
};

enum MaterialType { kPhong };

class Object
{
public:

// Setting up the object-to-world and world-to-object matrices

Object(const Matrix44f &o2w) : objectToWorld(o2w), worldToObject(o2w.inverse()) {}


virtual ~Object() {}
virtual bool intersect(const Vec3f &, const Vec3f &, float &, uint32_t &, Vec2f &) const = 0;
virtual void getSurfaceProperties(const Vec3f &, const Vec3f &, const uint32_t &, const Vec2f &,
Vec3f &, Vec2f &) const = 0;
Matrix44f objectToWorld, worldToObject;
MaterialType type = kPhong;
Vec3f albedo = 0.18;
float Kd = 0.8; // phong model diffuse weight
float Ks = 0.2; // phong model specular weight
float n = 10; // phong specular exponent
};

// Compute the roots of a quadratic equation

bool solveQuadratic(const float &a, const float &b, const float &c, float &x0, float &x1)
{
float discr = b * b - 4 * a * c;
if (discr < 0) return false;
else if (discr == 0) {
x0 = x1 = - 0.5 * b / a;
}
else {
float q = (b > 0) ?
-0.5 * (b + sqrt(discr)) :
-0.5 * (b - sqrt(discr));
x0 = q / a;
x1 = c / q;
}

return true;
}

// Sphere class. A sphere type object
class Sphere : public Object
{
public:
Sphere(const Matrix44f &o2w, const float &r) : Object(o2w), radius(r), radius2(r * r)
{ o2w.multVecMatrix(Vec3f(0), center); }

// Ray-sphere intersection test

bool intersect(
const Vec3f &orig,
const Vec3f &dir,
float &tNear,
uint32_t &triIndex, // not used for sphere
Vec2f &uv) const // not used for sphere
{
float t0, t1; // solutions for t if the ray intersects
// analytic solution
Vec3f L = orig - center;
float a = dir.dotProduct(dir);
float b = 2 * dir.dotProduct(L);
float c = L.dotProduct(L) - radius2;
if (!solveQuadratic(a, b, c, t0, t1)) return false;

if (t0 > t1) std::swap(t0, t1);

if (t0 < 0) {
t0 = t1; // if t0 is negative, let's use t1 instead
if (t0 < 0) return false; // both t0 and t1 are negative
}

tNear = t0;

return true;
}

// Set surface data such as the normal and texture coordinates at a given point on the surface

void getSurfaceProperties(
const Vec3f &hitPoint,
const Vec3f &viewDirection,
const uint32_t &triIndex,
const Vec2f &uv,
Vec3f &hitNormal,
Vec2f &hitTextureCoordinates) const
{
hitNormal = hitPoint - center;
hitNormal.normalize();
// In this particular case, the normal is similar to a point on a unit sphere
// centred around the origin. We can thus use the normal coordinates to compute
// the spherical coordinates of Phit.
// atan2 returns a value in the range [-pi, pi] and we need to remap it to range [0, 1]
// acosf returns a value in the range [0, pi] and we also need to remap it to the range [0, 1]
hitTextureCoordinates.x = (1 + atan2(hitNormal.z, hitNormal.x) / M_PI) * 0.5;
hitTextureCoordinates.y = acosf(hitNormal.y) / M_PI;
}
float radius, radius2;
Vec3f center;
};

bool rayTriangleIntersect(
const Vec3f &orig, const Vec3f &dir,
const Vec3f &v0, const Vec3f &v1, const Vec3f &v2,
float &t, float &u, float &v)
{
Vec3f v0v1 = v1 - v0;
Vec3f v0v2 = v2 - v0;
Vec3f pvec = dir.crossProduct(v0v2);
float det = v0v1.dotProduct(pvec);

// ray and triangle are parallel if det is close to 0


if (fabs(det) < kEpsilon) return false;

float invDet = 1 / det;

Vec3f tvec = orig - v0;


u = tvec.dotProduct(pvec) * invDet;
if (u < 0 || u > 1) return false;

Vec3f qvec = tvec.crossProduct(v0v1);


v = dir.dotProduct(qvec) * invDet;
if (v < 0 || u + v > 1) return false;

t = v0v2.dotProduct(qvec) * invDet;

return t > 0;
}

class TriangleMesh : public Object
{
public:
// Build a triangle mesh from a face index array and a vertex index array
TriangleMesh(
const Matrix44f &o2w,
const uint32_t nfaces,
const std::unique_ptr<uint32_t []> &faceIndex,
const std::unique_ptr<uint32_t []> &vertsIndex,
const std::unique_ptr<Vec3f []> &verts,
std::unique_ptr<Vec3f []> &normals,
std::unique_ptr<Vec2f []> &st) :
Object(o2w),
numTris(0)
{
uint32_t k = 0, maxVertIndex = 0;
// find out how many triangles we need to create for this mesh
for (uint32_t i = 0; i < nfaces; ++i) {
numTris += faceIndex[i] - 2;
for (uint32_t j = 0; j < faceIndex[i]; ++j)
if (vertsIndex[k + j] > maxVertIndex)
maxVertIndex = vertsIndex[k + j];
k += faceIndex[i];
}
maxVertIndex += 1;

// allocate memory to store the position of the mesh vertices


P = std::unique_ptr<Vec3f []>(new Vec3f[maxVertIndex]);
for (uint32_t i = 0; i < maxVertIndex; ++i) {

// Transforming vertices to world space

objectToWorld.multVecMatrix(verts[i], P[i]);
}

// allocate memory to store triangle indices


trisIndex = std::unique_ptr<uint32_t []>(new uint32_t [numTris * 3]);
uint32_t l = 0;
N = std::unique_ptr<Vec3f []>(new Vec3f[numTris * 3]);
sts = std::unique_ptr<Vec2f []>(new Vec2f[numTris * 3]);

// Computing the transpose of the world-to-object matrix (the inverse of the object-to-world matrix)

Matrix44f transformNormals = worldToObject.transpose();


// generate the triangle index array and set normals and st coordinates
for (uint32_t i = 0, k = 0; i < nfaces; ++i) { // for each face
for (uint32_t j = 0; j < faceIndex[i] - 2; ++j) { // for each triangle in the face
trisIndex[l] = vertsIndex[k];
trisIndex[l + 1] = vertsIndex[k + j + 1];
trisIndex[l + 2] = vertsIndex[k + j + 2];

// Transforming normals

transformNormals.multDirMatrix(normals[k], N[l]);
transformNormals.multDirMatrix(normals[k + j + 1], N[l + 1]);
transformNormals.multDirMatrix(normals[k + j + 2], N[l + 2]);
N[l].normalize();
N[l + 1].normalize();
N[l + 2].normalize();
sts[l] = st[k];
sts[l + 1] = st[k + j + 1];
sts[l + 2] = st[k + j + 2];
l += 3;
}
k += faceIndex[i];
}
}
// Test if the ray intersects this triangle mesh
bool intersect(const Vec3f &orig, const Vec3f &dir, float &tNear, uint32_t &triIndex, Vec2f &uv)
const
{
uint32_t j = 0;
bool isect = false;
for (uint32_t i = 0; i < numTris; ++i) {
const Vec3f &v0 = P[trisIndex[j]];
const Vec3f &v1 = P[trisIndex[j + 1]];
const Vec3f &v2 = P[trisIndex[j + 2]];
float t = kInfinity, u, v;
if (rayTriangleIntersect(orig, dir, v0, v1, v2, t, u, v) && t < tNear) {
tNear = t;
uv.x = u;
uv.y = v;
triIndex = i;
isect = true;
}
j += 3;
}

return isect;
}
void getSurfaceProperties(
const Vec3f &hitPoint,
const Vec3f &viewDirection,
const uint32_t &triIndex,
const Vec2f &uv,
Vec3f &hitNormal,
Vec2f &hitTextureCoordinates) const
{
if (smoothShading) {
// vertex normal
const Vec3f &n0 = N[triIndex * 3];
const Vec3f &n1 = N[triIndex * 3 + 1];
const Vec3f &n2 = N[triIndex * 3 + 2];
hitNormal = (1 - uv.x - uv.y) * n0 + uv.x * n1 + uv.y * n2;
}
else {
// face normal
const Vec3f &v0 = P[trisIndex[triIndex * 3]];
const Vec3f &v1 = P[trisIndex[triIndex * 3 + 1]];
const Vec3f &v2 = P[trisIndex[triIndex * 3 + 2]];
hitNormal = (v1 - v0).crossProduct(v2 - v0);
}

// doesn't need to be normalized as the N's are normalized but just for safety
hitNormal.normalize();
// texture coordinates
const Vec2f &st0 = sts[triIndex * 3];
const Vec2f &st1 = sts[triIndex * 3 + 1];
const Vec2f &st2 = sts[triIndex * 3 + 2];
hitTextureCoordinates = (1 - uv.x - uv.y) * st0 + uv.x * st1 + uv.y * st2;
}
// member variables
uint32_t numTris; // number of triangles
std::unique_ptr<Vec3f []> P; // triangles vertex position
std::unique_ptr<uint32_t []> trisIndex; // vertex index array
std::unique_ptr<Vec3f []> N; // triangles vertex normals
std::unique_ptr<Vec2f []> sts; // triangles texture coordinates
bool smoothShading = true; // smooth shading by default
};

TriangleMesh* loadPolyMeshFromFile(const char *file, const Matrix44f &o2w)
{
std::ifstream ifs;
try {
ifs.open(file);
if (ifs.fail()) throw;
std::stringstream ss;
ss << ifs.rdbuf();
uint32_t numFaces;
ss >> numFaces;
std::unique_ptr<uint32_t []> faceIndex(new uint32_t[numFaces]);
uint32_t vertsIndexArraySize = 0;
// reading face index array
for (uint32_t i = 0; i < numFaces; ++i) {
ss >> faceIndex[i];
vertsIndexArraySize += faceIndex[i];
}
std::unique_ptr<uint32_t []> vertsIndex(new uint32_t[vertsIndexArraySize]);
uint32_t vertsArraySize = 0;
// reading vertex index array
for (uint32_t i = 0; i < vertsIndexArraySize; ++i) {
ss >> vertsIndex[i];
if (vertsIndex[i] > vertsArraySize) vertsArraySize = vertsIndex[i];
}
vertsArraySize += 1;
// reading vertices
std::unique_ptr<Vec3f []> verts(new Vec3f[vertsArraySize]);
for (uint32_t i = 0; i < vertsArraySize; ++i) {
ss >> verts[i].x >> verts[i].y >> verts[i].z;
}
// reading normals
std::unique_ptr<Vec3f []> normals(new Vec3f[vertsIndexArraySize]);
for (uint32_t i = 0; i < vertsIndexArraySize; ++i) {
ss >> normals[i].x >> normals[i].y >> normals[i].z;
}
// reading st coordinates
std::unique_ptr<Vec2f []> st(new Vec2f[vertsIndexArraySize]);
for (uint32_t i = 0; i < vertsIndexArraySize; ++i) {
ss >> st[i].x >> st[i].y;
}

return new TriangleMesh(o2w, numFaces, faceIndex, vertsIndex, verts, normals, st);
}
catch (...) {
ifs.close();
}
ifs.close();

return nullptr;
}

// Light base class

class Light
{
public:
Light(const Matrix44f &l2w, const Vec3f &c = 1, const float &i = 1) : lightToWorld(l2w), color(c),
intensity(i) {}
virtual ~Light() {}
virtual void illuminate(const Vec3f &P, Vec3f &, Vec3f &, float &) const = 0;
Vec3f color;
float intensity;
Matrix44f lightToWorld;
};

// Distant light

class DistantLight : public Light
{
Vec3f dir;
public:
DistantLight(const Matrix44f &l2w, const Vec3f &c = 1, const float &i = 1) : Light(l2w, c, i)
{
l2w.multDirMatrix(Vec3f(0, 0, -1), dir);
dir.normalize(); // in case the matrix scales the light
}
void illuminate(const Vec3f &P, Vec3f &lightDir, Vec3f &lightIntensity, float &distance) const
{
lightDir = dir;
lightIntensity = color * intensity;
distance = kInfinity;
}
};

// Point light

class PointLight : public Light
{
Vec3f pos;
public:
PointLight(const Matrix44f &l2w, const Vec3f &c = 1, const float &i = 1) : Light(l2w, c, i)
{ l2w.multVecMatrix(Vec3f(0), pos); }
// P: is the shaded point
void illuminate(const Vec3f &P, Vec3f &lightDir, Vec3f &lightIntensity, float &distance) const
{
lightDir = (P - pos);
float r2 = lightDir.norm();
distance = sqrt(r2);
lightDir.x /= distance, lightDir.y /= distance, lightDir.z /= distance;
// avoid division by 0
lightIntensity = color * intensity / (4 * M_PI * r2);
}
};

enum RayType { kPrimaryRay, kShadowRay };

struct IsectInfo
{
const Object *hitObject = nullptr;
float tNear = kInfinity;
Vec2f uv;
uint32_t index = 0;
};

bool trace(
const Vec3f &orig, const Vec3f &dir,
const std::vector<std::unique_ptr<Object>> &objects,
IsectInfo &isect,
RayType rayType = kPrimaryRay)
{
isect.hitObject = nullptr;
for (uint32_t k = 0; k < objects.size(); ++k) {
float tNear = kInfinity;
uint32_t index = 0;
Vec2f uv;
if (objects[k]->intersect(orig, dir, tNear, index, uv) && tNear < isect.tNear) {
isect.hitObject = objects[k].get();
isect.tNear = tNear;
isect.index = index;
isect.uv = uv;
}
}

return (isect.hitObject != nullptr);
}

// Compute reflection direction

Vec3f reflect(const Vec3f &I, const Vec3f &N)
{
return I - 2 * I.dotProduct(N) * N;
}

// Compute refraction direction

Vec3f refract(const Vec3f &I, const Vec3f &N, const float &ior)
{
float cosi = clamp(-1, 1, I.dotProduct(N));
float etai = 1, etat = ior;
Vec3f n = N;
if (cosi < 0) { cosi = -cosi; } else { std::swap(etai, etat); n= -N; }
float eta = etai / etat;
float k = 1 - eta * eta * (1 - cosi * cosi);
return k < 0 ? 0 : eta * I + (eta * cosi - sqrtf(k)) * n;
}

// Evaluate the Fresnel equations (ratio of reflected light for a given incident direction and surface normal)

void fresnel(const Vec3f &I, const Vec3f &N, const float &ior, float &kr)
{
float cosi = clamp(-1, 1, I.dotProduct(N));
float etai = 1, etat = ior;
if (cosi > 0) { std::swap(etai, etat); }
// Compute sini using Snell's law
float sint = etai / etat * sqrtf(std::max(0.f, 1 - cosi * cosi));
// Total internal reflection
if (sint >= 1) {
kr = 1;
}
else {
float cost = sqrtf(std::max(0.f, 1 - sint * sint));
cosi = fabsf(cosi);
float Rs = ((etat * cosi) - (etai * cost)) / ((etat * cosi) + (etai * cost));
float Rp = ((etai * cosi) - (etat * cost)) / ((etai * cosi) + (etat * cost));
kr = (Rs * Rs + Rp * Rp) / 2;
}
// As a consequence of the conservation of energy, transmittance is given by:
// kt = 1 - kr;
}

inline float modulo(const float &f)
{
return f - std::floor(f);
}

Vec3f castRay(
const Vec3f &orig, const Vec3f &dir,
const std::vector<std::unique_ptr<Object>> &objects,
const std::vector<std::unique_ptr<Light>> &lights,
const Options &options,
const uint32_t & depth = 0)
{
if (depth > options.maxDepth) return options.backgroundColor;
Vec3f hitColor = 0;
IsectInfo isect;
if (trace(orig, dir, objects, isect)) {

// Evaluate surface properties (P, N, texture coordinates, etc.)

Vec3f hitPoint = orig + dir * isect.tNear;


Vec3f hitNormal;
Vec2f hitTexCoordinates;
isect.hitObject->getSurfaceProperties(hitPoint, dir, isect.index, isect.uv, hitNormal,
hitTexCoordinates);
switch (isect.hitObject->type) {

// Simulate a diffuse object


case kPhong:
{

// Light loop (loop over all lights in the scene and accumulate their contribution)
Vec3f diffuse = 0, specular = 0;
for (uint32_t i = 0; i < lights.size(); ++i) {
Vec3f lightDir, lightIntensity;
IsectInfo isectShad;
lights[i]->illuminate(hitPoint, lightDir, lightIntensity, isectShad.tNear);

bool vis = !trace(hitPoint + hitNormal * options.bias, -lightDir, objects, isectShad, kShadowRay);

// compute the diffuse component


diffuse += vis * isect.hitObject->albedo * lightIntensity * std::max(0.f, hitNormal.dotProduct(-lightDir));

// compute the specular component


// what would be the ideal reflection direction for this light ray
Vec3f R = reflect(lightDir, hitNormal);
specular += vis * lightIntensity * std::pow(std::max(0.f, R.dotProduct(-dir)), isect.hitObject->n);
}
hitColor = diffuse * isect.hitObject->Kd + specular * isect.hitObject->Ks;
//std::cerr << hitColor << std::endl;
break;
}
default:
break;
}
}
else {
hitColor = options.backgroundColor;
}

return hitColor;
}

// The main render function. This is where we iterate over all pixels in the image, generate primary rays
// and cast these rays into the scene. The content of the framebuffer is saved to a file.

void render(
const Options &options,
const std::vector<std::unique_ptr<Object>> &objects,
const std::vector<std::unique_ptr<Light>> &lights)
{
std::unique_ptr<Vec3f []> framebuffer(new Vec3f[options.width * options.height]);
Vec3f *pix = framebuffer.get();
float scale = tan(deg2rad(options.fov * 0.5));
float imageAspectRatio = options.width / (float)options.height;
Vec3f orig;
options.cameraToWorld.multVecMatrix(Vec3f(0), orig);
auto timeStart = std::chrono::high_resolution_clock::now();
for (uint32_t j = 0; j < options.height; ++j) {
for (uint32_t i = 0; i < options.width; ++i) {
// generate primary ray direction
float x = (2 * (i + 0.5) / (float)options.width - 1) * imageAspectRatio * scale;
float y = (1 - 2 * (j + 0.5) / (float)options.height) * scale;
Vec3f dir;
options.cameraToWorld.multDirMatrix(Vec3f(x, y, -1), dir);
dir.normalize();
*(pix++) = castRay(orig, dir, objects, lights, options);
}
fprintf(stderr, "\r%3d%c", uint32_t(j / (float)options.height * 100), '%');
}
auto timeEnd = std::chrono::high_resolution_clock::now();
auto passedTime = std::chrono::duration<double, std::milli>(timeEnd - timeStart).count();
fprintf(stderr, "\rDone: %.2f (sec)\n", passedTime / 1000);

// save framebuffer to file


std::ofstream ofs;
ofs.open("out.ppm");
ofs << "P6\n" << options.width << " " << options.height << "\n255\n";
for (uint32_t i = 0; i < options.height * options.width; ++i) {
char r = (char)(255 * clamp(0, 1, framebuffer[i].x));
char g = (char)(255 * clamp(0, 1, framebuffer[i].y));
char b = (char)(255 * clamp(0, 1, framebuffer[i].z));
ofs << r << g << b;
}
ofs.close();
}

// In the main function of the program, we create the scene (create objects and lights) as well as set
// the options for the render (image width and height, maximum recursion depth, field-of-view, etc.).
// We then call the render() function.

int main(int argc, char **argv)
{
// loading geometry
std::vector<std::unique_ptr<Object>> objects;
// lights
std::vector<std::unique_ptr<Light>> lights;
Options options;

// aliasing example
options.fov = 36.87;
options.width = 1024;
options.height = 747;
options.cameraToWorld[3][2] = 12;
options.cameraToWorld[3][1] = 1;

Matrix44f xform;
xform[0][0] = 1;
xform[1][1] = 1;
xform[2][2] = 1;
TriangleMesh *mesh = loadPolyMeshFromFile("./plane.geo", xform);
if (mesh != nullptr) {
mesh->smoothShading = false;
objects.push_back(std::unique_ptr<Object>(mesh));
}

float w[5] = {0.04, 0.08, 0.1, 0.15, 0.2};


for (int i = -4, n = 2, k = 0; i <= 4; i+= 2, n *= 5, k++) {
Matrix44f xformSphere;
xformSphere[3][0] = i;
xformSphere[3][1] = 1;
Sphere *sph = new Sphere(xformSphere, 0.9);
sph->n = n;
sph->Ks = w[k];
objects.push_back(std::unique_ptr<Object>(sph));
}

Matrix44f l2w(11.146836, -5.781569, -0.0605886, 0, -1.902827, -3.543982, -11.895445, 0,
5.459804, 10.568624, -4.02205, 0, 0, 0, 0, 1);
lights.push_back(std::unique_ptr<Light>(new DistantLight(l2w, 1, 5)));

// finally, render
render(options, objects, lights);

return 0;
}
